I have written a few things lately about using GPT to help with some projects I have going on: IT projects, writing projects, philosophical projects, and so on. More or less, it comes down to this: whatever I think of, I throw at ChatGPT and see what it says. In most cases, I do not copy and paste what I get from GPT directly, because much of what it prints does not sound quite right to me. At times I will copy a paragraph and make only a few edits, while at other times I will throw away everything it returned and write the piece from scratch.
As I was throwing questions at ChatGPT while experimenting with GPT-2, I ran into a loop of bad Python code. The project involved using the Hugging Face Transformers library with Python to train a local GPT-2 model. I am not an expert Python programmer, but I have followed the language since Python 2 and know enough to write a few simple programs and to debug more complex ones. So I understood why the code was failing, but no matter what I tried, GPT-3 kept repeating the same answers and the same code. Then I stopped and thought: the cutoff date for GPT-3's primary training data was in 2021. This project deals with NLP, a fast-moving field, and I would bet the code has changed drastically since then. And you know what, I was right.
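For context, here is a minimal sketch of the kind of fine-tuning setup I was trying to get working, using the Transformers Trainer API. The file name train.txt, the model size, and the hyperparameters are placeholders for illustration, and the exact API details have shifted between library versions, so treat this as a sketch rather than a recipe:

```python
# Sketch: fine-tune a local GPT-2 model on a plain-text file with Hugging Face Transformers.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "train.txt" is a placeholder for whatever text corpus you are training on.
dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False because GPT-2 is a causal (left-to-right) language model, not a masked one.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=2,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=collator,
)

trainer.train()
trainer.save_model("gpt2-finetuned")
```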
I went out to the web, following pointers from the conversation with GPT-3, and found that the Hugging Face repo now includes a script that does exactly what I wanted, along with detailed documentation on how to use it. So, as an experiment, I took this information back to GPT and said, no, this is what works now. It recognized the new command as doing what I wanted and was able to analyze what the script did, but when I asked the original question again, I got back the same code that did not work.
With this in mind, and I must digress, I asked GPT a question to see how it would answer. This post is based on that answer but written completely from my point of view. I asked what would happen if people started relying too much on the data the GPT system already has instead of providing their own unique content, and the quick answer was, “relying too heavily on AI can lead to a homogenization of content, where everything begins to look and sound the same.” And that is exactly what we need to avoid.
The point here is to make sure we do not become so reliant on a system that we start reproducing exactly what it says. Without our unique content, the AI will eventually become stale and no longer useful. But what happens when it has pulled in all of the uniqueness it can? Will it become stale then? Think about it: if an AI knows all things and everyone’s style, by default it becomes monotone. What if we instead look toward creating individual AIs that match our style and personality, and then train them on what interests us individually? But then, would that become a smarter version of you? Could it do your job? Could it fool people into thinking it is you? What if you could train an AI to do your job? Is that AI then your property, or is it the property of the company it is doing the work for?
We are not far off from a world where this could become reality. Are we ready to face it? I think some people are, but a lot of people are not. There is a fictional story in there somewhere, but this is real life, in real time, right now.
Where do we go from here?
Deep AI Thoughts