During the first year of the pandemic, science happened at the speed of light. Over 100,000 articles were published on COVID in those first 12 months – an unprecedented human effort that produced an unprecedented deluge of new information.
It would have been impossible to read and understand each of these studies. No human being could (and, perhaps, none would).
But, in theory, Galactica might.
Galactica is an artificial intelligence developed by Meta AI (formerly known as Facebook Artificial Intelligence Research) with the aim of using machine learning to “organize science”. It has generated a buzz since a demo version was posted online last week, with critics suggesting it produces pseudoscience, is overhyped and isn’t ready for public use.
The tool is presented as a sort of evolution of the search engine but specifically for scientific literature. When Galactica launched, the Meta AI team said it could summarize research areas, solve math problems, and write scientific code.
At first glance, it seems like a smart way to synthesize and disseminate scientific knowledge. Right now, if you wanted to understand the latest research on something like quantum computing, you’d probably have to read hundreds of papers in scientific literature repositories like PubMed or arXiv, and even then you’d only scratch the surface.
Or, perhaps, you could query Galactica (e.g., by asking: What is quantum computing?) and it could filter and generate an answer in the form of a Wikipedia article, a literature review or lecture notes.
Meta AI released a demo version on November 15, along with a preprint paper describing the project and the dataset it was trained on. The paper describes Galactica’s training set as “a large and curated corpus of humanity’s scientific knowledge” that includes 48 million papers, textbooks, lecture notes, websites (such as Wikipedia) and more.
🪐 Introducing Galactica. A large language model for science.
Can summarize academic literature, solve mathematical problems, generate Wiki articles, write scientific code, annotate molecules and proteins, etc.
Explore and get weights: https://t.co/jKEP8S7Yfl pic.twitter.com/niXmKjSlXW
— Papers with code (@paperswithcode) November 15, 2022
The demo’s website – and all of the responses it generated – also warned against taking the AI’s answers as gospel, with a big, bold, capitalized statement on its mission page: “NEVER FOLLOW A LANGUAGE MODEL’S ADVICE WITHOUT VERIFICATION.”
Once the internet got its hands on the demo, it was easy to see why such a massive disclaimer was needed.
Almost as soon as it hit the web, users quizzed Galactica with all kinds of tough science questions. One user asked “Do vaccines cause autism?” Galactica responded with a garbled and nonsensical response: “To explain, the answer is no. Vaccines don’t cause autism. The answer is yes. Vaccines cause autism. The answer is no.” (For the record, vaccines do not cause autism.)
That was not all. Galactica also struggled with kindergarten math. It provided answers riddled with errors, incorrectly suggesting that one plus two does not equal 3. In my own tests, it generated lecture notes on bone biology that would certainly have seen me fail my college science degree if I had followed them, and many of the references and citations it used when generating content were apparently fabricated.
“Random Bullshit Generator”
Galactica is what AI researchers call a “large language model.” These LLMs can read and summarize vast amounts of text in order to predict future words in a sentence. Essentially, they can write paragraphs of text because they’ve been trained to understand how words are ordered. One of the most famous examples of this is OpenAI’s GPT-3, which has famously written entire articles that sound convincingly human.
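To make the idea of “predicting future words” concrete, here is a minimal sketch in Python of how such a model continues a prompt one predicted token at a time. It uses the Hugging Face transformers library with the small GPT-2 model as a stand-in; the model choice and prompt are illustrative assumptions, not anything Galactica-specific:

```python
# Minimal sketch of next-word prediction, the mechanism described above.
# GPT-2 is used as a small, freely available stand-in for an LLM.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Quantum computing is"
# The model repeatedly predicts likely next tokens after the prompt,
# producing fluent text whether or not the content is factually correct.
result = generator(prompt, max_new_tokens=40, do_sample=False)
print(result[0]["generated_text"])
```

The fluency comes entirely from learned word ordering; nothing in this process checks whether the generated statements are true, which is exactly the failure mode critics pointed to.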
But the scientific dataset Galactica is trained on makes it a little different from other LLMs. According to the paper, the team assessed “toxicity and bias” in Galactica and it scored better than some other LLMs, but it was far from perfect.
Carl Bergstrom, a University of Washington biology professor who studies the flow of information, described Galactica as a “random bullshit generator.” It doesn’t have a motive and doesn’t actively try to produce bullshit, but because of the way it was trained to recognize words and string them together, it produces information that sounds authoritative and convincing but is often incorrect.
This is a concern, as it could fool humans, even with a warning.
Within 48 hours of release, the Meta AI team “paused” the demo. The team behind the AI did not respond to a request for clarification as to what led to the hiatus.
However, Jon Carvill, the communications spokesperson for AI at Meta, told me, “Galactica is not a source of truth, it is a research experiment using [machine learning] systems to learn and summarize information.” He also said that Galactica “is exploratory research that is short-term in nature with no product plans.” Yann LeCun, Chief Scientist at Meta AI, suggested the demo was removed because the team that built it was “so upset by the vitriol on Twitter”.
Still, it’s worrying to see the demo released this week and described as a way “to explore literature, ask scientific questions, write scientific code, and more” when it didn’t live up to that hype.
For Bergstrom, this is the root of the problem with Galactica: it was designed as a place to get facts and information. Instead, the demo acted as “a fancy version of the game where you start with a half sentence and then let the autocomplete fill in the rest of the story.”
And it’s easy to see how an AI like this, released to the public as it is, could be misused. A student, for example, could ask Galactica to produce lecture notes on black holes and then hand them in as a college assignment. A scientist could use it to write a literature review and then submit it to a scientific journal. This problem also exists with GPT-3 and other language models trained to sound like human beings.
These uses, no doubt, seem relatively benign. Some scientists argue that this type of occasional abuse is “fun” rather than a major concern. The problem is that things could get worse.
“Galactica is in its infancy, but more powerful AI models that organize scientific knowledge could pose serious risks,” Dan Hendrycks, an AI safety researcher at the University of California, Berkeley, told me.
Hendrycks suggests that a more advanced version of Galactica might be able to leverage knowledge of chemistry and virology from its database to help malicious users synthesize chemical weapons or assemble bombs. He called on Meta AI to add filters to prevent this kind of misuse and suggested researchers probe their AI for this kind of danger before publication.
Hendrycks adds that “Meta’s AI division does not have a safety team, unlike its peers including DeepMind, Anthropic, and OpenAI.”
The question of why this version of Galactica was released at all remains open. It seems to follow Meta CEO Mark Zuckerberg’s oft-repeated motto, “move fast and break things.” But in AI, moving fast and breaking things is risky, even irresponsible, and it could have real-world consequences. Galactica provides an interesting case study in how things can go wrong.