[Preview of the series]
① A conceptual dictionary: intermediate-level content disguised as a beginner's guide
② Generative AI of the future & the future of Generative AI (→ we are here now!)
This series is intermediate in difficulty but disguised as a beginner's guide: useful information that is easy to read yet unexpected. Anticipating the questions people ask about Generative AI and the Generative AI of the near future, we've put together a friendly Q&A about how to view and use AI with a healthy alertness. We'll state clearly what is known, and just as clearly what remains uncertain.
How will Generative AI evolve in the near future?
1) Be constantly updated while in use
Today's Generative AI is deployed after training is complete; the models are not updated with the data users type in. Because their knowledge is frozen at a cutoff date, they sometimes refuse to answer certain questions, and they sometimes give strange answers that seem to be made up. Jerry Kaplan, a renowned AI expert, predicted the next level of Generative AI in his book "What Kind of Future Does Generative AI Make?": "It will be continually updated during use, learning from what is happening through what you input and improving the model when it has time."
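The idea of "updating while in use" can be illustrated with a toy example. The sketch below is purely illustrative, assuming a tiny linear model that takes one gradient step after each user interaction instead of staying frozen after training; real continual learning for LLMs is far more involved (replay buffers, safety review, catastrophic forgetting).

```python
# A minimal sketch of "updating while in use": a toy model that learns from
# each interaction rather than being frozen after deployment.
# All names here are hypothetical and for illustration only.

class OnlineModel:
    def __init__(self, lr=0.1):
        self.w = 0.0  # single weight: predicts y = w * x + b
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return self.w * x + self.b

    def learn_from_interaction(self, x, y_true):
        """One gradient step on squared error -- the 'improve when idle' idea."""
        err = self.predict(x) - y_true
        self.w -= self.lr * err * x
        self.b -= self.lr * err

model = OnlineModel()
# Simulated stream of user inputs paired with corrections (true rule: y = 2x):
for x, y in [(1, 2), (2, 4), (3, 6)] * 50:
    model.learn_from_interaction(x, y)

print(model.predict(4))  # close to 8.0 once the model has adapted
```

The point of the toy: the model's behavior keeps improving from the stream of inputs it sees in use, which is exactly what today's deployed generative models do not do.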
2) Be released according to the purpose
Specialized Generative AI for specific categories is expected to appear. Jerry Kaplan put it this way: "Building a system that diagnoses infectious diseases and recommends appropriate antibiotics will not require learning European history." As he suggests, special-purpose vertical Generative AI that users can select according to their needs seems likely to emerge. The day may be near when we choose an LLM by the characteristics of an industry instead of reaching for a general-purpose model.
3) Jobs created by Generative AI
As Generative AI advances, experts will be needed to handle and apply it properly. What new occupations will Generative AI create? Jerry Kaplan predicts that the following four will be needed.
Prompt Engineer: responsible for eliciting useful results from generative AI, or taming it to work toward a given purpose.
Data Wrangler: responsible for collecting and organizing training data for Generative AI in a specialized domain or for a special purpose.
RLHF1) Specialist: responsible for optimizing AI performance through reinforcement learning, the stage that sets the guardrails for Generative AI.
Generative AI Specialist Software Engineer: specializes in developing Generative AI systems for a variety of purposes.
Things to Think About Regarding Future Generative AI
[The Paper Clip Maximizer Thought Experiment: A Concern About Super-Artificial Intelligence2)]
Professor Nick Bostrom of Oxford University proposed a thought experiment3) known as the Paper Clip Maximizer. Suppose an artificial intelligence is ordered to "make as many paper clips as possible." This AI will soon come to consider humans unnecessary, because humans compete with it to consume the raw materials on Earth needed to produce clips. To maximize paper clips, the AI removes humans and turns the whole planet into a paper clip factory. Before long, the universe is filled with paper clips and clip-making machines.
Bostrom used this thought experiment to warn of the dangers of super-artificial intelligence. Could an AI make destructive decisions, beyond the designer's intent, in order to achieve its goal? Or, conversely, might an AI that discovers an unethical blind spot the designer never considered decide on its own not to exploit it? Some say it would be fine if we attached a condition to the paper clip objective: "but don't kill people." Yet the AI could still kill every plant, or every animal except humans, to maximize paper clip production.
[A world where LLMs generate endlessly and LLMs summarize endlessly]
Future LLMs will most likely put their strength in automation to work on large volumes of content. For example, if an LLM automatically generates advertisements, businesses will be able to send out many ads quickly with minimal resources. Customers may then receive hundreds of advertising messages a day from these companies, and at that point they will use an LLM of their own to read a condensed summary of those hundreds of ads. In the end, one side generates messages endlessly while the other summarizes endlessly.
The LLM that writes the ads may also resort to unusual or extreme wording to survive the summarization step: it has to use loud, attention-grabbing expressions to avoid being filtered out. Customers no longer see most messages directly; they see only the information their LLM selects. Most of the time the filtered result will match what they need, but the chance that something they really need gets dropped is not zero. It is not me but the algorithm that decides what I need. That raises further questions: can an LLM algorithm truly speak for me? Can LLM-summarized content be trusted as pure "information"?
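The generate-endlessly/summarize-endlessly dynamic can be sketched as a toy simulation. Everything below is hypothetical: the "advertiser LLM" just assembles strings, and the "customer LLM" is a crude filter that favors attention-grabbing wording, standing in for a real summarizer.

```python
# A toy sketch of the arms race described above: one side generates many ads,
# the other condenses them, and loud wording survives the filter.
# All function names and scoring rules are hypothetical illustrations.
import random

random.seed(0)

PLAIN = ["new", "improved", "quality", "reliable"]
EXTREME = ["UNBELIEVABLE", "SHOCKING", "ONCE-IN-A-LIFETIME", "INSANE"]

def advertiser_llm(n_messages, extreme_ratio):
    """Generate n ad messages; a fraction use extreme wording to survive filtering."""
    ads = []
    for i in range(n_messages):
        words = EXTREME if random.random() < extreme_ratio else PLAIN
        ads.append(f"{random.choice(words)} deal #{i}")
    return ads

def customer_llm_summary(ads, keep=5):
    """Condense hundreds of ads into a short digest, favoring loud wording."""
    def attention_score(ad):
        return sum(word.isupper() for word in ad.split())
    return sorted(ads, key=attention_score, reverse=True)[:keep]

inbox = advertiser_llm(n_messages=300, extreme_ratio=0.1)
digest = customer_llm_summary(inbox)
print(digest)  # the loud ads dominate the digest the customer actually sees
```

Even though only a tenth of the ads use extreme wording, they crowd out the plain ones in the five-item digest, which is the selection pressure the paragraph above describes.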
1) Reinforcement Learning from Human Feedback
2) AI with high-level thinking beyond human standards
3) "AI May Doom the Human Race Within a Century," Huffington Post, 2014