Former OpenAI core staff start a company together, raising over $100 million to reclaim the "original aspiration"

Readers should be familiar with OpenAI. The San Francisco-based lab is one of the world leaders in basic AI research. OpenAI was originally a non-profit organization, but in order to better combine industry and research, and to retain employees with stock options, it later restructured into a capped-profit company.

However, in December last year, a group of early and core employees of OpenAI left the company en masse, causing quite a stir in the field. This was no ordinary job-hopping: the departing employees kept quiet for a long time afterwards, so that even insiders did not know what they were going to do next.

Recently, this group of former OpenAI core employees finally announced their unprecedented grand plan: to solve the long-standing “black box” problem of neural networks and develop tools for researchers to explain how AI really works.


They started a new company, Anthropic, to improve AI security and interpretability. The founding members believe that “artificial general intelligence” is coming soon and that AI security will be the most important area of research in the future.

Investors also have high expectations for the new company. Its Series A funding round has already closed at $124 million, led by Skype co-founder Jaan Tallinn, with participation from Eric Schmidt, Dustin Moskovitz and others.

What kind of organization is Anthropic? Who are its founding members? What are the implications of what it is doing for the entire AI foundation and application space? Let’s take a closer look at this brand new organization, which recently left stealth status, and find out what it’s all about.

Making AI safer and more explainable

AI has always been a "black box." Although neural networks are described as being designed after the neural workings of the human brain, it has been hard to explain exactly how they operate: how input data is transformed into output, and what role specific mechanisms and circuits play.

In the view of a small group of researchers, given that AI is being used in increasingly high-stakes systems such as autonomous driving, medical diagnosis, drug discovery, and even the military, its lack of explainability will block real technological progress and, if continually ignored, may even lead to irreversible and serious consequences.

However, since deep learning made an important breakthrough about a decade ago, the technique has become the foundation of AI progress. Ever-larger models and ever-greater computing power have made AI seem "omnipotent," and the public has not treated uninterpretability as a serious problem. This is why many people in AI, especially those working on applications, are not currently concerned with the black-box problem.


AI's black-box problem: a black box is an algorithm that turns input data into output, but it often discovers patterns without being able to explain how it found them. Image credit | Topbots

Founded in late 2015, OpenAI’s mission is to enable general artificial intelligence and build a machine system capable of learning and reasoning like the human mind. Since its establishment, OpenAI has been engaged in fundamental AI research and is known mainly for its GPT language generation models (especially GPT-3).

But in fact, improving the interpretability of AI so that it can be applied more safely has also been one of OpenAI's research directions; this line of work simply looks less prominent next to the far more famous GPT-3.

The group of employees who left at the end of last year, on the other hand, believe that as models get bigger and more powerful, artificial general intelligence is drawing closer and is achievable in the foreseeable future, and that under that premise AI interpretability and safety become incredibly important. These employees are regarded as the "effective altruists" of AI: put simply, they believe not only in investing heavily in basic AI research to make the world a better place, but also in caring about how effective that investment actually is.

Their philosophy is not fundamentally at odds with OpenAI's, but they are concerned about the trend of OpenAI becoming increasingly opaque and utilitarian. One of the most direct examples: OpenAI turned GPT-3 into a commercial API for large industry customers before it had addressed the model's bias and safety issues.


Eventually, this core group of employees left en masse at the end of last year. Many of them are now part of the company Silicon Star is introducing today: Anthropic.

So it may be more accurate to describe Anthropic this way: an AI basic-research organization doing things "properly," reclaiming the original aspiration that OpenAI has slowly forgotten.

Anthropic's website describes it as an AI safety and research company dedicated to developing reliable, interpretable and steerable AI systems. "Today's large-scale general-purpose (AI) systems can deliver significant benefits, but they are at the same time unpredictable, unreliable, and opaque. Our mission is to make progress on these issues."

"Anthropic's mission is to engage in fundamental scientific research that allows us to build more capable, general-purpose, and reliable AI systems, and to apply them for the benefit of humanity," said Dario Amodei, Anthropic co-founder and CEO.


The common plain-language answer to the question "what exactly is a neural network" is "a computing system that mimics the neural workings of the human brain." What many professionals are reluctant to mention, however, is that we do not fully understand how the human brain works.

The current state of affairs in deep learning is that we are frantically applying this “half-understood” knowledge to neural networks and developing AI systems that are used in increasingly risky scenarios without thinking about interpretability and safety.

For example, we know that the more parameters a neural network has, the better it seems to generalize, but we cannot explain how particular parameters relate to one another, or which adjustments will change the output. Likewise, we know that bias objectively exists in society and is inevitably projected into datasets built from social data, which is a major source of bias in AI systems; yet for now there is no way to remove that bias other than laboriously improving the datasets.
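To make the dataset-bias point concrete, here is a minimal sketch, using scikit-learn on purely synthetic data, of how a sampling bias baked into the training set reappears as a learned weight in the model. The "group" and "skill" features and every number here are hypothetical and have nothing to do with Anthropic's work.

```python
# Minimal sketch: how sampling bias in training data ends up in a model.
# Synthetic data and scikit-learn only; nothing here comes from Anthropic's research.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A "protected" attribute (group A vs. group B) that should be irrelevant,
# plus a genuinely predictive feature.
group = rng.integers(0, 2, size=n)   # 0 = group A, 1 = group B
skill = rng.normal(size=n)

# Biased labelling: the historical data rewards group A more often at equal skill.
label = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, label)

# The model learns a large negative weight on `group`, i.e. it reproduces the bias.
print("weight on group:", model.coef_[0][0])
print("weight on skill:", model.coef_[0][1])
```

Nothing in the model's code mentions the group explicitly; the bias arrives entirely through the data, which is exactly why "fix the dataset" is currently the main, and laborious, remedy.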

In an interview with Vox's Future Perfect, Amodei said that AI researchers should deepen their understanding of the inner workings of machine learning systems and use that knowledge to develop safer systems, rather than chasing headline-grabbing feats and blindly building ever-larger neural networks.

Why? Consider a classic incentive "distortion" in deep learning: you try to get a machine agent out of a maze by designing a score (beans) to motivate it, only for the agent to obsess over a higher score, wandering down every wrong path and eating all the beans, without ever treating getting out of the maze as its real goal.
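A toy calculation makes the distortion easy to see. The sketch below, with made-up reward values and behaviours, shows how an agent that maximizes the designer's proxy score can prefer eating every bean over actually leaving the maze.

```python
# Minimal sketch of the incentive distortion described above.
# The maze, the reward values, and the two behaviours are made up for illustration.

BEAN_REWARD = 1.0   # proxy reward the designer wrote: +1 per bean eaten
EXIT_REWARD = 5.0   # ... plus +5 for reaching the exit

def proxy_return(beans_eaten: int, reached_exit: bool) -> float:
    """Total score under the designer's proxy reward."""
    return BEAN_REWARD * beans_eaten + (EXIT_REWARD if reached_exit else 0.0)

# Two behaviours a score-maximizing agent might compare.
shortest_path  = {"beans_eaten": 3,  "reached_exit": True}    # what the designer wanted
eat_everything = {"beans_eaten": 20, "reached_exit": False}   # wander every corridor

print("shortest path :", proxy_return(**shortest_path))    # 8.0
print("eat everything:", proxy_return(**eat_everything))   # 20.0
# A reward-maximizing agent prefers the second behaviour even though it never
# solves the maze: the proxy score has diverged from the designer's real goal.
```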

As neural networks get bigger and stronger, any incentive distortion, whether introduced deliberately or arising by accident, will only widen the gap between what the system actually does and what it was designed to do, with potentially serious consequences when it is deployed in the real world.

It would greatly help the next advances in AI if we had a tool that gave a clearer, more direct view of how a neural network works and showed which parts of the network to adjust in order to avoid such situations. Every researcher would like such a tool; the problem is that no such tool exists today.

And Anthropic’s mission is to develop such a tool and get it into the hands of more AI researchers.
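To give a flavour of what an "explanation tool" can look like today, here is a minimal sketch of one of the simplest existing techniques, input-gradient saliency, written in PyTorch with a placeholder model and input. It is a generic illustration of the idea, not Anthropic's method.

```python
# Minimal sketch of one existing explanation technique: input-gradient saliency.
# Generic PyTorch code with a placeholder model; this is not Anthropic's tooling.
import torch
import torch.nn as nn

# Placeholder network and input; in practice these would be a trained model
# and a real sample (e.g. a flattened 28x28 image).
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
x = torch.randn(1, 784, requires_grad=True)

logits = model(x)
class_idx = logits.argmax(dim=1).item()

# Gradient of the predicted class's score with respect to the input:
# entries with large magnitude mark the input features the prediction
# is most sensitive to.
logits[0, class_idx].backward()
saliency = x.grad.abs().squeeze()

print("most influential input features:", saliency.topk(5).indices.tolist())
```

Techniques like this only scratch the surface, which is precisely the gap Anthropic says it wants to close.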

Star Founding Team + Investors

Among the core employees who left OpenAI late last year were Dario Amodei and his sister Daniela, who, according to LinkedIn, founded Anthropic in February of this year with Dario as CEO and Daniela as president.

Dario was one of OpenAI's early employees and has published several papers on AI interpretability, safety, and related topics. Before leaving he was a VP of Research at OpenAI, and before that a researcher at Baidu working under then chief scientist Andrew Ng. He earned his PhD at Princeton University and later returned to Stanford University, where he had studied as an undergraduate, as a postdoctoral scholar. As a former core member of OpenAI, he is considered one of the most cutting-edge researchers in deep learning.


Dario Amodei Photo credit | RedHat

Dario's sister, Daniela Amodei, previously worked in the same area at OpenAI as VP of Safety and Policy; before that she held roles at Stripe (whose founder is an OpenAI investor) and in the US Congress.


A group photo of some of the members of OpenAI, with the Amodei siblings highlighted by red arrows Image credit | OpenAI

Anthropic's founding team members, most of whom are key OpenAI employees or affiliates, include (in no particular order) Jared Kaplan, Sam McCandlish, Tom Brown, Gabriel Goh, Kamal Ndousse, Jack Clark, Ben Mann, Chris Olah, and others.

These researchers have worked on many of OpenAI's key topics, including GPT-3, the Microscope neuron-visualization tool, multimodal neurons in neural networks, accident analysis for the safe design of AI models, reinforcement learning from human preferences, and more. The loss of these key people could leave OpenAI at a slight disadvantage on these topics in the future.

As an example: Chris Olah is one of the authors of the OpenAI multimodal-neurons paper. He is a well-known "geek" in the field who turned down a graduate-school offer from Yoshua Bengio to join the Google Brain team instead. He has mentored PhD students at Google and has more citations than many researchers who hold PhDs, yet he never even finished a bachelor's degree.


A brief introduction to the OpenAI multimodal-neurons paper Olah co-authored: years ago, researchers discovered that some neurons in the human brain respond to triggers that differ in modality but refer to the same concept; the same neuron can fire for the name "Halle Berry", a photo of her, or a sketch.

OpenAI's multimodal-neuron research builds on CLIP, the general-purpose vision system the lab released this year. The authors found such "neurons" (for example, Neuron 244) in the penultimate layer of CLIP's neural network. The research suggests that the kind of "abstraction" common in natural vision may also be present in synthetic computer vision.
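For readers who want to poke at this themselves, the sketch below shows one way to read out a single channel ("neuron") inside CLIP's image encoder using the open-source clip package and a forward hook. The hooked layer, the image path, and the channel index 244 are illustrative choices only, not necessarily the exact unit reported in the paper.

```python
# Minimal sketch of reading out one channel ("neuron") inside CLIP's image encoder.
# Requires the open-source clip package (pip install git+https://github.com/openai/CLIP.git)
# and Pillow. The hooked layer and channel index are illustrative choices only.
import torch
import clip
from PIL import Image

device = "cpu"
model, preprocess = clip.load("RN50x4", device=device)  # a ResNet-based CLIP model

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # For a ResNet stage the output shape is (batch, channels, height, width).
        activations[name] = output.detach()
    return hook

# Hook the last residual stage of the ResNet-based image encoder.
model.visual.layer4.register_forward_hook(save_activation("layer4"))

# "photo.jpg" is a placeholder path; use any image you want to probe.
image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)
with torch.no_grad():
    model.encode_image(image)

channel = 244  # illustrative index only
response = activations["layer4"][0, channel].mean().item()
print(f"mean activation of channel {channel}: {response:.3f}")
```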

With such a strong team, Anthropic has become the latest hot target in the eyes of investors. The company has now closed a $124 million Series A round of funding and has officially come out of stealth status.

The Series A was led by Jaan Tallinn, co-founder of Skype and an early investor in DeepMind. (A major reason he won the lead over a crowd of other big-tech and big-name investors is that he shares the founding team's philosophy and is one of the main funders of the "effective altruism" movement.)


Jaan Tallinn Image Source | Enterprise Estonia

Anthropic's other investors include Dustin Moskovitz (Asana CEO and Facebook co-founder), Eric Schmidt (former chairman and CEO of Google), James McClave and others; a genuinely star-studded lineup.

Anthropic says the money will mainly fund compute-intensive basic scientific research; it has plans to commercialize eventually, but that will not be the main goal in the short term. Regardless, the size of the $124 million Series A is enough to show that investors believe Anthropic has every chance of catching up with OpenAI in valuation.

In an era where more attention is being paid to basic AI research than ever before, Anthropic will undoubtedly be a strong “competitor” to OpenAI and DeepMind, and a research company to keep a close eye on.

This article is from WeChat: Silicon Star People (ID: guixingren123), written by Spectrum and Du Chen, edited by Vicky Xiao
