Most of us have had those “what would make a great movie scene” moments, where we had an idea about how something could or should happen on the big screen. Sometimes we even see them in movies. The problem is that scenes need to be strung together in a way that is coherent, so there is a reason for the actions you see to be, well, reasonable. There are far too many movies that are little more than an excuse for special effects and random events, with plot holes you could park a small mountain in.
I had a lot of these ideas, it seemed, and I sort of started writing them down. I thought they were neat. But there was no pattern, no framework. Then one day I was talking to a neighbor who does some high-end software research, and I asked the question, “What’s the biggest problem with Artificial Intelligence?”
His brief answer was “How do you shrink them?”
Um, huh? He clarified: how do you shrink, psychoanalyze, and treat mental problems in a solid-state, software-algorithm-based intelligence? When people have “mental breakdowns,” we give them drugs and/or send them to a psychiatrist. Maybe we lock them up if they are dangerous. But what do you do with an AI that freaks out? Pull the plug, switch their AC with their DC, give them more data or less, overclock them, low-voltage them, reformat and reinstall, or… what? And what could an AI take that humans couldn’t, or vice versa? From that came the idea of a military AI with PTSD. Suddenly, a lot of possible pieces fell into place, and I started writing more seriously.