Modern Technology: Making Intelligence Less Artificial
Current AIs are idiot savants: brilliant at simpler tasks, such as playing Go or categorizing images, but lacking the generality and versatility to function fully in everyday life. While moments of frightening intelligence can shine through in the application of AI, many argue that we are still decades away from even the mercurial abilities of a five-year-old girl. What’s funny is that the hard parts of trying to build AGI (a computer as sharp as humans in general, not just at one narrow specialty) are not as intuitive as you’d expect.
…how are you liking my article so far? I’ll let you in on a secret: I didn’t write it. No, I didn’t plagiarize it either…sort of. The 100 or so words that you see above were written by a Generative Adversarial Network (GAN). A GAN is a form of AI that pits two neural nets against each other, each trying to fool or catch out the other. One net, the generator, uses the data at its disposal to create increasingly human-like output, while the other, the discriminator, calls bullshit when it thinks the output is artificial, creating a feedback loop that gradually improves the quality of the network’s output. GANs can be used to write articles, create deepfake videos, generate an audio clip of your voice saying just about anything, and even create entirely new digital people.
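For readers curious what that generator-versus-discriminator tug-of-war looks like in practice, here is a minimal sketch in Python using the PyTorch library. It is purely illustrative, not the system that wrote the paragraph above: the toy generator learns to mimic numbers drawn from a simple bell curve, and every name, size, and learning rate here is an assumption chosen for brevity.

```python
# Illustrative sketch of a GAN training loop (assumed setup, toy 1-D data).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into a fake "data point".
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

# Discriminator: scores how likely a data point is to be real (1) vs. fake (0).
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(5000):
    # Real samples come from the distribution we want to imitate.
    real = torch.randn(64, 1) * 1.5 + 4.0

    # Train the discriminator: say "real" on real data, "fake" on generated data.
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Train the generator: produce output the discriminator mistakes for real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real data's mean (about 4.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

The same feedback loop, scaled up enormously and applied to words, pixels, or audio instead of single numbers, is what lets these systems produce articles, faces, and voices.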
This is neat, but also a bit concerning. In an era of disinformation and digital bombardment, it means that it is now easier than ever to create and disseminate new content. All the while, you and I are none the wiser as to the origin of that content. Why is this potentially dangerous?
First, we have to acknowledge the shifts in employment this will create.
My extremely lucrative job as a volunteer writer for Modern Mississauga will likely no longer exist in ten years. Media personalities, news anchors, and even television celebrities may someday soon no longer need to be actual people, but digital avatars that entertain and inform us. The music we listen to, the art we observe, the novels we read, and the videos we watch will increasingly be produced not by people, but by AI.
And at the core of this evolution lies the true danger. As ruthless and conniving as some people may be, they are still people and typically have some sense of right and wrong. However, if what we watch, read, and hear is being generated by something nonhuman, it doesn’t necessarily come laden with basic human morality. Yes, any artificial system will require a human behind the scenes telling it what to create and what to do; however, even this single degree of separation from the audience will make it easier and easier to commit atrocious acts with digital information. Elections will continue to be manipulated, populations will continue to be misinformed, and disinformation will move at the ever-increasing speed of digital processing.
Part of the challenge is that developers are creating unbridled new technologies without thinking through the implications and without equivalent technologies to counter their creations. This is like creating biological weapons without an antidote. In the case of GANs, we need to be thinking about what SANs, or Skeptical Adversarial Networks, could look like. How do we train digital skeptics even more powerful than the generators, so that we can identify artificially created content and keep ourselves informed in the age of disinformation?
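One very rough way to picture such a digital skeptic is an ordinary classifier trained on examples labeled human-written versus machine-generated, then asked to score new text. The sketch below, in Python with the scikit-learn library, is only an assumption of how a first attempt might look; the tiny hand-made dataset, the word-frequency features, and the names used are all illustrative, and a real detector would need vastly more data and far richer signals.

```python
# Illustrative sketch of a "digital skeptic": a real-vs-generated text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data (made up for this example): 0 = human-written, 1 = machine-generated.
texts = [
    "I walked to the market this morning and bought fresh bread for my family.",
    "The election results surprised local reporters who had followed the race closely.",
    "The bread of morning walked freshly to market with surprising local intelligence.",
    "Reporters of results followed the bread closely in a race of fresh elections.",
]
labels = [0, 0, 1, 1]

# Turn each passage into word-frequency features and fit a simple classifier.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)

skeptic = LogisticRegression()
skeptic.fit(features, labels)

# Score a new passage: estimated probability that it was machine-generated.
new_text = ["The market walked closely to the reporters of fresh morning results."]
print(skeptic.predict_proba(vectorizer.transform(new_text))[0][1])
```

The point of the sketch is the shape of the idea, not the specifics: a skeptic that is trained, tested, and improved just as relentlessly as the generators it is meant to catch.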
It is a beautiful thing to have more knowledge at our fingertips than in any era of human civilization before; however, it is a travesty that so much of that information is a lie. As the internet continues to stockpile terabytes of data daily, we need to start thinking about how to separate the wheat from the chaff. We must create tools to discern real from fake, truth from lies, and artificial from human.