You’ve probably been hearing about generative artificial intelligence (AI). Applications like ChatGPT seem almost magical in their ability to answer questions, generate stories, and analyze images. It’s fun to play with them. Julie of Strict Julie Spanked posts lots of fiction created with the help of AI. She’s managed to get around the guardrails that are supposed to keep content free of sex and violence.
Generative AI is a very interesting area of computer science. It’s based on a model of how we think the human brain works. An algorithm called a neural network simulates how we learn. We learn through experience, essentially trial and error. Each time something happens, good or bad, the neural pathway for that memory gets stronger.
A neural network operates the same way. The idea is simple, but the implementation is complex. Generative AI uses a neural network to try to predict the next word (strictly speaking, in ChatGPT’s case, the next token, which may be only a fragment of a word). This is where your mind may bend a bit. First, all the selected websites (millions of them, including ours) are stored. The digital content of millions of books and periodicals is added as well. Essentially, it’s everything ever written. The process isn’t discriminatory: it includes material in many languages, including programming languages.
The learning program grabs the first item and notes the first word. It then tries to predict the next word. If it’s right, it increments a counter between the pair of words; if it’s wrong, it doesn’t. The program is building a map with paths between words. Once it has the first two words, it tries to predict the third, fourth, and so on. This goes on for millions and millions of documents and trillions of individual words.
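The counting idea above can be sketched in a few lines of Python. This is only a toy illustration, not how real models are built: it tracks adjacent word pairs (bigrams), while actual generative AI learns weights over much longer stretches of context. The function name and sample sentences are made up for the example.

```python
from collections import defaultdict

def build_bigram_counts(documents):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in documents:
        words = doc.split()
        # Each adjacent pair strengthens the "path" between those two words.
        for current, nxt in zip(words, words[1:]):
            counts[current][nxt] += 1
    return counts

docs = ["the cat sat on the mat", "the cat ran"]
counts = build_bigram_counts(docs)
print(dict(counts["the"]))  # {'cat': 2, 'mat': 1}
print(dict(counts["cat"]))  # {'sat': 1, 'ran': 1}
```

After scanning both sentences, the path from “the” to “cat” has been reinforced twice, so it is the strongest path out of “the.” Real training does this over trillions of words instead of a dozen.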
In the end, it has what’s called a model. The model, when presented with a query, finds the strongest path (or paths) for the request. I don’t understand how that works. All I know is that the model delivers sensible results. No one is sure exactly how a generative model works. It is capable of doing amazing things. It can also hallucinate.
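“Finding the strongest path” can also be sketched with a toy: starting from a word, repeatedly pick the most common next word from a table of pair counts. The counts below are hand-made for the example; a real model samples from probabilities computed over a huge vocabulary and far more context than one word, so this greedy walk is only an illustration of the idea.

```python
def generate(counts, start, length=5):
    """Greedily follow the strongest path: always take the most common next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:  # dead end: nothing ever followed this word
            break
        word = max(followers, key=followers.get)  # strongest outgoing path
        output.append(word)
    return " ".join(output)

# Toy pair counts, as if built from a couple of sentences.
counts = {
    "the": {"cat": 2, "mat": 1},
    "cat": {"sat": 1, "ran": 1},
    "sat": {"on": 1},
    "on": {"the": 1},
}
print(generate(counts, "the"))  # the cat sat on the cat
```

Even this crude version produces grammatical-looking output, which hints at why much bigger models, with vastly richer paths, can sound so convincing.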
The model can be asked a question, and it will answer, along with citations for sources that support its answer. When checked, however, both the answer and the cited authorities sometimes turn out to be false. Somehow, the model created a plausible but completely untrue answer. Computer scientists are trying to figure out how this happens and how to fix it.
When I put in the first chapter of a book I started and asked what would happen next, it came up with plot twists I never considered. It was useful!
Generative AI is only possible because the cost of computing has dropped so much. Cloud providers like AWS and Google have millions of CPUs and virtually unlimited memory and storage. Since generative AI requires a huge brute-force effort to build the model, massive computing resources are needed. The model itself doesn’t need that much power to execute. When you ask ChatGPT a question, you are interacting with the model.
Generative AI has started to bump into some very scary issues. These models have been created using virtually all human knowledge. They execute on computing resources large enough to let them “think.” Well, they don’t think, at least not yet. They do something close. I don’t know if there is a word for what they do.
New versions of these models can store how each user reacts to the responses delivered and change their output to suit the way they “think” the user wants them to respond. The ultimate version of this sort of customization is for the model to change dynamically, learning the same way we do when we relate to another person. This is another step closer to giving computers consciousness.
While this may seem very cool (It is!), it is also very scary. Did you read any sci-fi stories about computers taking over the world? Armed robots fighting wars? Well, if AI continues to advance, autonomous war robots will be possible in the near future. Computers, with the ability to think and act, could decide humans aren’t “logical” and declare war on us.
If this sounds farfetched, try ChatGPT and see how uncannily human it can seem. I have an advanced degree in computer science. I got my Master’s in 1996, when there was no such thing as generative AI. We did look at neural networks and built small ones. I never guessed that one day (now!) we would be able to build a model using all human knowledge.
Is computer takeover inevitable? It’s possible unless we take action now. I’m not advocating outlawing neural networks, but I do think we need international treaties against autonomous war robots and laws limiting what computers can control autonomously.
When I was in school, we firmly believed that humans could never build a machine smarter than us. Alan Turing seemed to demonstrate this. We could build machines that manage data faster than us, but it was said, we couldn’t build a machine smarter than its creator. It looks like we may have been wrong. So far, we haven’t seen any limits on what generative AI can do.
Have you tried entering your first book into generative AI? (I liked your first book!) I’d like to see how a different story would develop.
I am working on a different, non-sexual book now, but I’m not using generative AI for the current project; it doesn’t really take in a whole book anyway. The book is done, and I am in rewrite mode now.