[New Wired] Can code write itself? If, one day, artificial intelligence can write code from ordinary-language descriptions, will programmers become obsolete?
“I’m a lazy and ignorant quasi-computer scientist, so I’m trying to get computers to program themselves.”
So Yann LeCun ‘lazily’ wrote in a recent tweet.
In recent years, researchers have used artificial intelligence to improve translation between programming languages and to automatically fix bugs. The AI system DrRepair, for example, has been shown to solve most problems in programs that produce error messages. Still, researchers dream of the day when artificial intelligence can write programs from simple descriptions supplied by non-experts.
In other words, we are left with a question: can code write itself?
Recently, Microsoft revealed plans to bring GPT-3, known for generating text, to programming. CEO Satya Nadella said, “If you can describe what you want to do in natural language, GPT-3 will generate a list of the most relevant formulas for you to choose from. The code writes itself.”
Charles Lamanna says GPT-3 provides the sophistication to help people tackle complex challenges and supports those with little or no coding experience. GPT-3 translates natural language into Power Fx, a fairly simple programming language similar to Excel formulas, which Microsoft introduced in March.
GPT-3 finally comes in handy!
Microsoft’s new feature is based on the architecture of a neural network called Transformer, which is used by major tech companies including Baidu, Google, Microsoft, Nvidia and Salesforce to create large language models using text training data crawled from the Web.
Last September, Microsoft took an exclusive license for GPT-3 from OpenAI, with Microsoft EVP and CTO Kevin Scott saying, “Let’s democratize cutting-edge AI research!”
So far, from the initial $1 billion investment, to the AI supercomputer built for OpenAI and announced at Build last year, to the exclusive GPT-3 license, Microsoft has successfully “captured” OpenAI.
OpenAI co-founder Elon Musk: the opposite of open.
Yes, OpenAI is not open anymore.
Other users simply said: OpenAI can also be renamed ClosedAI.
With such a layout, what exactly did Microsoft do with GPT-3?
At this year’s Build conference, Microsoft announced that it is bringing GPT-3 to Power Apps.
Power Apps is Microsoft’s app-development software, launched in 2015 as part of the Power Platform; with it, anyone can develop an app without writing code, much as they would design a PowerPoint deck.
The Power Platform includes Power BI, Power Apps, Power Automate and Power Virtual Agents. Together, these four components cover low-code development needs ranging from non-technical users to professional software developers.
Low-code development relies on “drag-and-drop” visual tools that let developers build applications quickly with minimal hand-written code.
Microsoft has integrated GPT-3 into Power Fx, the low-code programming language used by Power Apps, for the first time, which undoubtedly opens a new chapter for GPT-3 as a foundational technology in commercial use.
Power Fx is based on Microsoft Excel’s formula language and is easier to use than traditional programming languages, but building complex data queries with it has still involved a steep learning process.
For example, suppose we want to implement the request “find all users with expired subscriptions in the US”. Previously, we would have had to construct a Power Fx statement ourselves to perform the lookup; now, with GPT-3, we simply type the requirement in plain language and it is translated directly into a Power Fx code statement.
It works like typing a question into a search box and then choosing from the search results: GPT-3 returns several Power Fx formula suggestions for the statement you typed, and the developer picks the one that seems most appropriate.
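To make the example query concrete, here is a minimal Python sketch of the filter that “find all users with expired subscriptions in the US” describes. The record fields, sample data, and the Power Fx formula shown in the comment are all illustrative assumptions, not the actual Power Apps schema or GPT-3 output.

```python
from datetime import date

# Hypothetical user records; field names are illustrative only.
users = [
    {"name": "Ana",  "country": "US", "subscription_expires": date(2021, 1, 15)},
    {"name": "Ben",  "country": "US", "subscription_expires": date(2022, 6, 1)},
    {"name": "Chen", "country": "CN", "subscription_expires": date(2020, 12, 31)},
]

def expired_us_subscribers(users, today):
    """Find all users with expired subscriptions in the US.

    A Power Fx formula for the same query might look something like
    (illustrative, not GPT-3's actual suggestion):
        Filter(Users, Country = "US", SubscriptionExpires < Today())
    """
    return [u for u in users
            if u["country"] == "US" and u["subscription_expires"] < today]

print([u["name"] for u in expired_us_subscribers(users, date(2021, 6, 1))])
# → ['Ana']
```

The point of the feature is that a non-programmer never writes the filter condition at all; they type the English request and choose among generated formulas like the one sketched in the docstring.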
To ‘weed out’ programmers? The best model managed only a 14% success rate
While the feature cannot yet fully replace the code a developer would write after understanding the problem, it can go a long way toward helping developers make the right choices.
In a recent test, the best model had only a 14 percent success rate in an introductory programming challenge prepared by a group of AI researchers.
Nevertheless, the researchers who conducted the study concluded that the test demonstrated that “machine learning models are starting to learn how to code.”
To challenge the machine learning community and measure how well large language models perform at programming, a group of AI researchers last week proposed a benchmark for automated coding in Python.
In that test, GPT-Neo, an open-source language model with an architecture similar to OpenAI’s flagship model, outperformed GPT-3. Dan Hendrycks, the paper’s lead author, attributed this to the fact that GPT-Neo was fine-tuned on data collected from GitHub, a popular repository for collaborative coding projects.
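The core mechanic behind a coding benchmark like this can be sketched simply: each problem ships with input/output test cases, and a model’s generated program counts as a success only if it passes all of them. The harness below is a simplified illustration under that assumption, not the benchmark’s actual evaluation code; the `solve()` convention and the toy problem are hypothetical.

```python
def passes_all_tests(candidate_source, test_cases):
    """Return True if the model-generated program defines solve() and
    solve(input) matches the expected output for every test case."""
    namespace = {}
    try:
        exec(candidate_source, namespace)  # load the generated code
        solve = namespace["solve"]
        return all(solve(inp) == expected for inp, expected in test_cases)
    except Exception:
        return False  # crashes, syntax errors, or a missing solve() all count as failure

# A toy "problem": double the input. One correct and one buggy candidate.
tests = [(1, 2), (3, 6), (10, 20)]
good = "def solve(x):\n    return 2 * x\n"
bad  = "def solve(x):\n    return x + 2\n"

print(passes_all_tests(good, tests), passes_all_tests(bad, tests))
# → True False
```

A reported success rate such as the 14% above is then just the fraction of benchmark problems for which a generated program clears every test case.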
The significance of such AI models is that they can help spread ‘low-code tools’ to a much larger audience, meaning that in the future anyone could ‘teach themselves’ to become a developer.
At this point, GPT-3, now officially in commercial use, is no longer just a toy for spinning stories on Reddit.
Will it become a bigger threat to programmers?
Some netizens say: it will!
What do you think?
Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/the-lazy-man-lecun-wants-to-let-the-computer-program-itself-netizen-10-gpt-3-short/