The OpenAI logo on a screen with the ChatGPT website displayed on a mobile phone in this file photo in Brussels, Belgium, on Dec. 12, 2022.
Jonathan Raa | Nurphoto | Getty Images
Attendees at the annual World Economic Forum couldn’t get enough of a new development in artificial intelligence: generative AI.
Priya Lakhani, CEO of online learning platform Century, said educators flocked to social media soon after ChatGPT's launch to discuss how the technology could impact the education sector.
“It’s really amazing. What I’ve seen in the conversations on social media is that there are educators who see this as an enabling factor, and that’s exciting,” Lakhani said during a WEF panel discussing the potential and pitfalls of generative AI.
“They’ve overcome digital fatigue after the pandemic, they’re interested in technology, they’re using learning management systems and virtual learning environments, and they’re thinking, well, how can we use this as a means of engaging different cohorts.”
Most machine learning tools rely on existing data, identifying patterns to spot trends or achieve a desired outcome. Recommendation algorithms in social apps like Facebook and TikTok, for instance, show ads to users based on their browsing behavior.
Generative AI tools like ChatGPT and Dall-E stand out from the crowd with their ability to take data input and generate new content. People have used this technology to create everything from college essays to artwork.
Services like Lensa AI, which turn selfies into a variety of sci-fi and anime-style avatars, have also proved popular.
Generative artificial intelligence has big implications for how children learn, Lakhani said, adding that the technology also increases the risk of cheating and plagiarism.
“Then you get the naysayers who are absolutely horrified, right?” she said. “They’re terrified because they think, wait, the kids are going to cheat on their homework. It has real-world consequences.”
This week at the World Economic Forum in Davos, Switzerland, generative AI all but replaced crypto and the so-called “Web3” as the hyped technology of choice for top business executives and politicians.
“Generative artificial intelligence has enormous potential,” said Hiroaki Kitano, CEO of Sony Computer Science Laboratories, at a generative artificial intelligence panel on Tuesday.
“It’s not just something that pops up out of the blue. We have a long history of deep learning,” Kitano said. “It’s like a continuous evolution of AI capabilities.”
Microsoft is reportedly betting billions on generative artificial intelligence in hopes that it will pay off for its business — and for others, too. Last week, news site Semafor reported that the company was planning to invest $10 billion in ChatGPT creator OpenAI in a deal that valued the startup at $29 billion.
Not everyone is convinced by the billions suddenly flowing into generative AI.
Jim Breyer, founder and CEO of Breyer Capital, said Microsoft’s investment in OpenAI was good for the company from a strategic perspective, but he believes the Redmond-based tech giant is overpaying.
“To me, this is a sign of froth. This is a strategic deal for Microsoft, and it will help them catch up with Google and others,” Breyer told CNBC’s Sara Eisen on Thursday.
“However, I cannot justify the valuation as a private investor.”
It’s easy to see why Microsoft is excited. ChatGPT has demonstrated the ability to come up with more creative responses than tools that generate mostly generic responses to user queries.
Take, for example, a person who wants to know what to do for their child’s birthday. ChatGPT can come up with a plan for the day, including advice on what cake to buy or what games to play.
In this sense, ChatGPT has been touted as a disruptor to Google — a tool users can turn to instead of the search engine pioneer. The chatbot’s novel responses have even raised questions as to whether its reasoning process could resemble human cognition.
OpenAI CEO Sam Altman has acknowledged the limitations of ChatGPT, tweeting in December that it was “a mistake to be relying on it for anything important right now.”
“ChatGPT is incredibly limited, but good enough at some things to give the illusion of greatness,” Altman said at the time.
ChatGPT’s limitations include factual errors. Sony’s Kitano also said it’s important to acknowledge those limitations.
“At the same time, we see many limitations. When you ask ChatGPT a specific question, sometimes the answers are surprising. But if you dig into the details, the facts may not be all that accurate,” he said.
“If you open up your PC and ask it about yourself, you’ll see — ‘Oh, I don’t understand’ — there are all kinds of things going on.”
Without directly confirming the investment on Tuesday, Microsoft President Brad Smith said generative tools like ChatGPT have already sparked talk of legal and ethical issues.
“What really needs to be imagined is what are the different ways this technology can be used? How can it be used for good, how can it be used to create problems?” Smith said on a panel moderated by CNBC’s Karen Tso on Tuesday.
One concern is that generative artificial intelligence could become a welcome weapon for hackers and other bad actors, such as online disinformation operatives.
Researchers at cybersecurity firm Check Point say ChatGPT is already being used by hackers to recreate common malware.
“We may find that this is going to become more of a hot topic as people think about the future of information, the potential influence operations, the people who are creating misinformation, and also combating it,” Smith said.