A 10,000-Character Essay on 50 Years of the Internet: Deducing Web1.0 to Web5.0

If the most talked-about concept in tech circles in 2021 was the “Metaverse”, then the most talked-about concept of 2022 must be Web 3.0. As far as anyone can tell, Eshita, a blockchain researcher, was the first to define the concept of Web 3.0:

Web1.0: Read

Web2.0: Read + Write

Web3.0: Read + Write + Own

Intuitively, though, this division may not be accurate. After all, before 2004 there were already large numbers of BBSes, communities and forums, as well as social software such as QQ, through which information could be both read and written. Reading and writing cannot be the essential difference between Web1.0 and Web2.0. Jinri Toutiao and the portal sites of that earlier era both look like information distribution, but they are fundamentally different in technology and business logic.

“Own” in Web 3.0 does not by itself represent value. An asset, by definition, is something that can be measured in monetary terms and that, through transactional or non-transactional events, brings benefits to an individual or business. A property in a depressed area that incurs heavy annual property taxes can even be a negative asset: if it cannot be liquidated or circulated and brings in no cash flow, mere ownership has no value.

Some time ago I saw a joke about Web1.0 to Web3.0 in my WeChat Moments and found it rather interesting, so I combined it with some of my past experience, let my imagination run, and added a bit about Web4.0 and Web5.0. Many friends were interested in the joke, so I took the time to write this article to make it logically coherent, and to convince myself that the future is bright and worth working toward for decades.

This extrapolation also draws on my own background. I studied automation and pattern recognition, spent a year on a brain-computer interface project, then worked for more than ten years on informatization, digitization and data technology, with a year of tinkering on my own in between. Outside of work I also pay close attention to economic theory.

Let’s first walk through the deduction from Web1.0 to Web5.0, and then expand the story of each stage in detail.

From Web1.0 to Web5.0: Predicting the Critical Moments

Web1.0

1993–2004: information sharing and interaction

Landmark launch event: the emergence of the Mosaic browser, one of the sparks that ignited the Internet wave.

Reason for change and impact on Web2.0: the Internet generated huge amounts of data but, because of technical bottlenecks, could not turn it into profit.

Web2.0

Since 2004: the value of data deeply mined, giving rise to the Internet giants

Landmark launch event: Google published its papers on GFS, MapReduce and BigTable from 2003 onward.

Reason for change and impact on Web3.0: the value of data was deeply mined, the Internet giants became profitable and formed monopolies, smartphone APPs walled data off from one another, and users came to need a more open and fair Internet. Meanwhile, the deep digitization of Internet platforms incubated the technical foundations of big data and cloud computing.

Web3.0

Starting in 2018: pervasive digitization and peer-to-peer value exchange

Landmark launch event: the entry into force of the General Data Protection Regulation (GDPR).

Reason for change and impact on Web4.0: data equality and peer-to-peer value exchange break the barriers between APPs and the monopoly of Internet platforms; individuals and organizations become deeply digitized, accumulating complete data portraits that can incubate AI; quantitative change leads to qualitative change.

Web4.0

Expected from 2030: the interaction of consciousness

Landmark launch event: a fully personal AI assistant that passes a “Turing test” judged by its owner’s acquaintances.

Reason for change and impact on Web5.0: in-depth simulation of the human brain and behavior completes the intelligence foundation Web5.0 requires; human-machine integration can further liberate productivity.

Web5.0

Expected from 2045: the interconnection of consciousness and the integration of human and machine

Predicted landmark launch event: passing the Turing test for human-machine fusion.

Web 1.0: The Era of Information Sharing

The Web 1.0 era gave birth to Internet applications such as portal websites, chat software, BBSes, and e-commerce shopping sites. The defining feature of this era was getting networked and getting online: any offline scene, once moved online, attracted hot attention. The rise of Web 1.0 rested on two things: the popularization of personal computers and the popularization of Internet access.

As Internet users multiplied, a huge amount of content and data was generated; server hardware costs and technical-team labor costs rose sharply, putting Internet companies under great pressure. Portal websites relied mainly on advertising for profit, but a web page’s layout space is limited, and cramming in too many ads hurts the user experience. Without personalized advertising and precise targeting to earn fees from many advertisers, an Internet company’s revenue could not support its huge infrastructure and technology costs. Social software of the Web 1.0 era likewise found it hard to store users’ data server-side, because the cost of storing and processing that data was too high.

In Web 1.0, most Internet applications could only publish, share and interact around information, with little deeper value mining. The profit model of the first-generation Internet was always a problem, and that problem led to the bursting of the first Internet bubble. The years 1994 to 2004 formed one complete cycle of rise, bubble and recession for the first Internet wave.

In the Web 1.0 era, Internet companies were famous but, in market value and profitability, still trailed the IT giants of enterprise informatization such as Microsoft, Cisco, Intel and IBM. The biggest challenge the Internet posed to data technology was the sheer volume of data coupled with the low value of any single record, whereas enterprise application and business-process data is typically already abstracted and refined. If the enterprise informatization data and systems targeted by the IT giants are like factories refining gold from a mine, then Internet data processing is like panning for gold in a river; the two scenarios require completely different technical systems. Internet companies needed to collect, store and process data at very low cost, and then monetize it through precise advertising systems.

Extended thinking

* In the era of Web1.0, how much time did you spend online, and socializing online, every day?

Web2.0: The Era of Big Data and the Birth of the Internet Giants

The landmark event of the birth of Web2.0 should be Google publishing its papers on GFS, MapReduce and BigTable from 2003 onward, which brought down the cost of storing, computing and processing data. Through internal self-research, Google conquered these three mountains of the Internet field; on the back of that big-data cost advantage it achieved profitability early and went public in 2004.
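
To give a flavor of the programming model those papers introduced, here is a minimal word-count sketch of the MapReduce idea in Python. It is an illustration of the paradigm only, not Google's implementation: the real systems shard the map and reduce phases across thousands of machines, with GFS underneath for storage.

```python
from collections import defaultdict

def map_phase(document):
    # A "map" task emits intermediate (key, value) pairs: here, (word, 1).
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(pairs):
    # "Reduce" groups the intermediate pairs by key and aggregates them.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the web generates data", "the data has value"]
pairs = [kv for doc in docs for kv in map_phase(doc)]
print(reduce_phase(pairs))  # {'the': 2, 'web': 1, 'data': 2, ...}
```

The point of the paradigm is that neither function needs to know how the data is partitioned across machines, which is what made clusters of cheap commodity hardware viable.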

Other Internet companies survived the hard years without big-data technology by subsidizing their Internet business with games, SMS services, and even some marginal businesses. Later, through open source and cooperation, the industry gradually engineered Google’s ideas into the big-data technologies and ecosystem that became the cornerstone of the Internet business.

The Web 2.0 era is the era of industrializing data, computing and products. The lower a platform’s cost of processing data and the higher its efficiency, the more surely its monopoly position forms. Over the past decade or so, Internet platforms have emerged in search, social networking, geographic services, information distribution and many other fields. These giants exploit their technological and scale advantages in data not only through precise advertising; by combining data, traffic and scenarios, they also pose huge challenges to traditional industries. Some traditional companies are even afraid to cooperate with Internet companies, worried that the moat built from decades of hard-won industry experience will be easily breached by data and traffic.

With accurate data, huge traffic can be formed; with traffic, one effectively controls the online marketing channel. Products that do not depend heavily on manufacturing, supply chains, logistics or channels are largely defenseless before monopolized traffic. Through big data and fine-grained modeling of each individual user, the Internet giants have also begun to penetrate finance, expanding their business scale further through financial leverage.

Massive stores of personal data let some platforms steer users toward specific products and addict them to the content being pushed. They practice big-data price discrimination against regular customers: for the same goods and services, the price changes after repeated viewings, and old customers pay more than new ones. They recommend only the products that promise commercial benefit, even counterfeit ones, rather than what actually suits the user.

Some platforms can even use data to manipulate individual desires, emotions and even ideology, directing users to read specific articles, vote for specific people, or form specific biases against specific groups. They can become proxy tools for political forces and influence national elections. Even the president of a major country can be banned from an Internet platform and lose his public pulpit.

In Web 2.0, with the rise of smartphones, the web era gave way to the APP era, and these drawbacks became especially evident.

One unfair phenomenon of the Web2.0 era is that users contribute the data the platforms need, yet the two sides are not equals. Users contribute accounts and data, but the Web 2.0 architecture is built from the perspective of the Internet application. An individual’s data lives on each APP’s servers: when an application shuts down, the user’s blog, articles, friend lists and relationships, and chat history vanish from the Internet, and it is hard for the user to preserve them locally for the long term.

In the PC-browser era of the Internet, websites could link to and reference one another, and users could easily subscribe to information across platforms. In the APP era, some platforms proclaimed they were going ALL IN on mobile and gutted their pure-web content and services: no login, no table of contents; no APP download, no full text. Users have become instruments of data operations and traffic conversion, penned into the cages of individual APPs, without enjoying the openness and transparency the Internet was supposed to bring.

On data security, personal data is over-collected under take-it-or-leave-it terms such as “no login, no service” and “no consent to data collection, no service”. The platforms’ management policies and technical processes for user data are not disclosed openly or transparently enough, and there have been security incidents in which data was misused under lax internal supervision or leaked externally.

Some Internet platforms have been labeled by users as monopolistic, domineering and algorithmically abusive. All of this runs contrary to the original intent of the Internet’s development, and Internet users look forward to change.

Extended thinking

* In the era of Web2.0, do you know how your data is used? 

Web 3.0: Pervasive Digitization and Peer-to-Peer Value Exchange

To regulate the expansion of Internet platforms and their use of data, Europe enacted the General Data Protection Regulation (GDPR), and China enacted the Data Security Law. The seven rights that data subjects enjoy under the GDPR are: access, rectification, erasure (the “right to be forgotten”), restriction of processing, portability, objection, and the right not to be subject to automated decision-making.

At the same time, under the influence of the distributed, decentralized philosophy of blockchain, the technology world also hopes to realize a brand-new Internet that is more transparent, fairer, more open, more decentralized, and value-connected. Individual users care not only about rights over their data but also about how value is shared under the new architecture; this is where the concept of Web3.0 was born. Buzzwords such as ICO, cryptocurrency, DeFi, GameFi and NFT emerged one after another, making Web3.0 a hot topic in the media, investment circles and tech circles. Many believe Web 3.0 is the next disruptive Internet architecture; many others believe it is mere hype, hard to truly implement, and destined to end up worthless.

If the problems Web2.0 brought users are monopoly, algorithmic opacity and data abuse, then Web3.0 must do better on distribution, privacy, open source, trust and connection, so that Internet users can truly share in its benefits. In the Web 2.0 era, even when the copyright of a work belongs to the user, it is hard to monetize works or data because traffic is entirely controlled by the platform. Hence, in this definition of Web3.0, equivalent value exchange for information replaces the notion of “Own”: if a resource cannot bring the expected benefit, ownership alone cannot express its value.

Negotiating peer-to-peer value exchange with the Internet giants requires strength and resources, not just legal protection. There are two ways to realize a peer relationship.

The first is to exclude the existing Internet platforms entirely and build an independent Web3.0 ecosystem out of peer individuals, or by limiting participation to a bounded scale. A blockchain architecture like Bitcoin’s is a very complete system, with rules that are fair and clear to most individuals, but such an architecture cannot support the massive user bases and application scenarios of Web 3.0.

Bitcoin’s architecture is so perfect it feels cold-blooded, a game that seems designed for robots. Its two most critical factors, energy and computing power, are precisely the basic elements of survival in a machine world. Imagine a world of robots that spend energy to generate bitcoin and exchange bitcoin back for energy: robots with more energy and better computing power can easily eliminate the rest. Perhaps the moment the blockchain truly shines will have to wait for the Web5.0 era.

Where the closed-loop blockchain architecture is not used, many projects merely dress themselves in the coat of Web3.0, which is very confusing and makes it hard for the public to tell right from wrong. For some of this chaos, see: “Various Chaos in Web3.0: Talking about StepN and NFT”.

The second way is to strengthen the data management, technology and value-exchange capabilities of ordinary enterprises and individuals, participate in the existing system, and dance with the Internet platforms. It may be more practical to build Web 3.0 using the philosophy and techniques of blockchain while making full use of existing technology and legal protections. So far the GDPR is only a set of regulations; there is no specific technology or product corresponding to its articles one by one, and the development and implementation of Web 3.0 will take longer than people think. If the core of Web3.0 is data equality and peer-to-peer value exchange, with data equality serving better, larger-scale peer-to-peer value exchange, then exploration can proceed around those two points.

The impact of Web 3.0 on individuals

Most companies have already completed basic digital construction, and past records can easily be traced through their various systems; enterprises keep data under their own control even when using SaaS software or the public cloud. But for individuals, where is most people’s data beyond photos, documents and scattered notes? Personal digitization is not just a pile of photos and files, just as an enterprise ERP application is not just a pile of files and data.

For example, individual users have assorted banking and wealth-management APPs on their phones, but rarely a trustworthy general-ledger manager to consolidate the transactions and data across accounts. A phone may hold dozens of APPs: one records your running data, another your sleep data, another your weight data. Yet when you want to aggregate these datasets for an attribution analysis, it is nearly impossible for a non-technical person.
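
As a concrete illustration of what that aggregation would look like if the data were accessible, here is a minimal sketch in Python with pandas. The app categories, columns and numbers are invented placeholders for whatever each APP actually exports.

```python
import pandas as pd

# Hypothetical daily exports from three separate apps (normally three silos).
runs   = pd.DataFrame({"date": ["07-16", "07-17", "07-18"], "km":    [5.0, 0.0, 8.2]})
sleep  = pd.DataFrame({"date": ["07-16", "07-17", "07-18"], "hours": [7.5, 6.0, 8.0]})
weight = pd.DataFrame({"date": ["07-16", "07-17", "07-18"], "kg":    [70.2, 70.4, 70.1]})

# One join on the date turns three silos into a single personal dataset.
daily = runs.merge(sleep, on="date").merge(weight, on="date")

# A first-cut attribution analysis: how do running and sleep co-vary with weight?
print(daily[["km", "hours", "kg"]].corr())
```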

Owing to business adjustments, the Nike Run Club (NRC) APP stopped service in mainland China on July 8, 2022. Users could export their data from the APP, but raw data has little value on its own: raw latitude-and-longitude records still need an application to interpret them. When you switch to a new fitness APP, can you still use your previous data and records?

Tim Berners-Lee, inventor of the HTTP protocol, proposed the concept of the Semantic Web in 1998. Its core idea: by adding computer-understandable semantics (metadata) to documents on the Internet (such as HTML documents), the entire Internet becomes a universal medium of information exchange. After entering the APP era, however, the Internet platforms did not develop in this open direction.

Dissatisfied with the platforms’ monopoly on data, Berners-Lee made another attempt, releasing the decentralized platform Solid in 2018 (not a blockchain platform): https://solid.mit.edu/. Solid’s design idea is that everyone has a data POD, hosted on their own server or by a third party. When a user accesses an Internet application, the data stays in the personal Solid POD, separating the application, platform data and personal data. Solid is only a first step: with personal data stored on PODs, data consistency and integration still have to be maintained.
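
Because Solid PODs expose data over plain authenticated HTTP rather than a proprietary API, the separation of application and data can be sketched with nothing more than an HTTP client. The POD URL, path and token below are placeholders; real Solid authentication uses Solid-OIDC and is more involved than a bare bearer token.

```python
import requests

POD = "https://alice.example-pod.org"   # hypothetical personal data POD
TOKEN = "..."                           # placeholder for a real Solid-OIDC token

# The application reads the user's data from *her* POD, not from its own servers.
resp = requests.get(
    f"{POD}/fitness/runs.ttl",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "text/turtle"},
)
resp.raise_for_status()
print(resp.text)  # RDF (Turtle) data that stays under the user's control
```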

When users want to migrate their articles off an Internet platform, they may use plug-in tools such as beepress or wxsync to sync the articles to a self-hosted open-source WordPress installation. When they want to collect and organize what they read across APPs, they may use tools like Cubox. The data walls that APPs erected in the Web2.0 era have made personal digitization far harder: what a browser’s bookmarks accomplished trivially in the Web1.0 era now takes a battery of specialized tools. In the Web 3.0 era there should be better tools to help individuals digitize more deeply, and that may be a good opportunity.

Most established businesses already have data platforms. In the Web 3.0 era, individuals likewise need a personal data steward to manage how all their data is stored, analyzed and exchanged. When an APP wants to call data, the steward decides which data may be called and whether the APP must pay for the call; when an APP generates data, that personal data is deposited back into the steward; when an APP ceases operation, it must hand personal data over to the steward in an easily readable form so that another open-source or free application can take over; and the steward can run joint analyses across the data generated by multiple APPs. Personally created works, such as articles and videos, are likewise stored first with the steward and then exchange data and value with the various content-distribution platforms through interfaces. Over time, all kinds of personal data accumulate in the steward, forming a virtual portrait of the individual and ultimately producing a sufficiently intelligent AI digital human. This incubation of AI by accumulated data is an important foundation for the development of Web4.0.
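
A minimal sketch of what the access-control half of such a personal data steward could look like. Everything here, including the idea of charging a fee per data call, is an assumption extrapolated from the paragraph above, not an existing product's API.

```python
class PersonalDataSteward:
    """Holds a person's data and decides what each APP may read, at what price."""

    def __init__(self):
        self.store = {}           # category -> list of records, e.g. "sleep"
        self.grants = {}          # app -> set of categories it may read
        self.fee_per_call = {}    # app -> fee charged per data call (assumed model)

    def grant(self, app, category, fee=0.0):
        self.grants.setdefault(app, set()).add(category)
        self.fee_per_call[app] = fee

    def write(self, category, record):
        # APPs must deposit the data they generate back into the steward.
        self.store.setdefault(category, []).append(record)

    def read(self, app, category):
        if category not in self.grants.get(app, set()):
            raise PermissionError(f"{app} has no grant for '{category}'")
        return self.store.get(category, []), self.fee_per_call.get(app, 0.0)

steward = PersonalDataSteward()
steward.write("sleep", {"date": "2022-07-18", "hours": 7.5})
steward.grant("fitness_app", "sleep", fee=0.01)
records, fee = steward.read("fitness_app", "sleep")
```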

For personal data to gain value, it must be expressed through services or products. In the Web 3.0 era, individuals also need to package their abilities into standard products so they can transact peer to peer. Over the past few years, across self-media, official accounts and short video, many experts have been commercializing their own value and gradually forming clear personal portraits. In this respect, the slogan of the WeChat Official Accounts platform fits the values of Web3.0 rather well.

The difficulty of personal digitization is that individual needs are concrete and clear, yet each person’s personality and privacy requirements differ. Will a good tool system emerge to accelerate everyone’s digitization?

Achieving Web 3.0 requires breaking the data wall between APPs

Breaking the platforms’ monopoly on data is also inseparable from phone makers’ protection of personal privacy. Apple CEO Tim Cook said in an interview with Time magazine that there is more information about all of us than there was five or ten years ago; it is everywhere, and you leave a digital footprint wherever you go.

Assuming network speeds are fast enough and latency low enough, the cloud-phone ecosystem could accelerate the birth of personal data stewards. If the cloud-phone approach proves feasible, the handset need only retain the battery, screen, camera, communications, and encryption/decryption; computation, storage and applications all live on the cloud phone. Hardware upgrades become one-click upgrades and expansions in the backend, and APPs can be upgraded in a virtual hardware environment without compatibility testing across a zoo of devices.

In the current Internet architecture, a great many applications require registration with a mobile phone number, and phone numbers map directly to the most private personal datum of all, the ID card number; once exposed, the impact on the individual is severe. If the virtual identity generated by the personal data steward in the cloud phone is what connects to the various APPs, a layer of protection is added over the real identity, achieving a higher level of data security and avoiding privacy leaks.
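
One standard way to sketch that protective layer: derive a stable but app-specific pseudonym from the real identity with a keyed hash, so no two APPs see the same identifier and none sees the phone number or ID card number itself. This is offered as an assumption about how a steward might do it, not a description of any existing cloud-phone product.

```python
import hmac, hashlib

def virtual_identity(secret_key: bytes, real_id: str, app: str) -> str:
    """Derive a per-app pseudonym; only the steward ever holds secret_key."""
    msg = f"{real_id}|{app}".encode()
    return hmac.new(secret_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"kept-only-inside-the-personal-steward"
print(virtual_identity(key, "+86-138-0000-0000", "shopping_app"))
print(virtual_identity(key, "+86-138-0000-0000", "social_app"))
# Different apps receive different, mutually unlinkable identifiers.
```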

Cloud phones might also sidestep the blockade on high-end mobile chips and realize a new architecture by overtaking on the curve. If the cloud-phone approach lands, the telecom operators could rise from their current status as mere data pipes and move to the main battlefield of digitization. Because cloud-phone technology makes heavy demands on bandwidth and latency, the operators’ base stations and edge data centers form a natural combination, well placed to dominate the cloud-phone market. In an operator-led cloud-phone system, the personal data stewards across cloud phones could form a supervised ad-hoc network, sharing distributed data and applications and standing against the monopoly Internet platforms.

The value of network links in Web3.0

In Web3.0, the individuals and enterprises involved stand in peer-to-peer relationships, and value exchange between them travels over complex network links rather than through a single Internet platform.

For example, when a company recruits, the posting can spread through a mini-program with clearly defined incentives, with every click, share and application recorded as encrypted information. After the message propagates across multiple links, the company can review the final applicants and distribute incentives to every contributor along the chain. Such a mini-program is not a full blockchain architecture, but if the company maintains its long-term credit on incentives, it could be an alternative to traditional job-site recruiting. After all, it is a win-win for companies to fill roles and for friends to pass along opportunities. Information matching becomes precise, and to a degree the privacy of the users along the link is well protected.
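
A minimal sketch of how such a mini-program could record the propagation link so incentives can later be distributed along it. Hash-chaining each hop to the previous record makes the recorded path tamper-evident; this illustrates the idea in the paragraph, not a full blockchain, and the even reward split is an invented example.

```python
import hashlib, json

def record_hop(chain, user_id):
    """Append one forwarding hop, binding it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    entry = {"user": user_id, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode())
    entry["hash"] = digest.hexdigest()
    chain.append(entry)

chain = []
for user in ["recruiter", "alice", "bob", "applicant"]:
    record_hop(chain, user)

# When the applicant signs up, the verified intermediaries share the incentive.
reward = 100.0
middle = chain[1:-1]
for hop in middle:
    print(hop["user"], "earns", reward / len(middle))
```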

Because administrative divisions do not coincide with geographic or cultural distance, information transmission cannot be prioritized by administrative division alone. Xinyang in Henan, for instance, shares Wuhan’s taste for hot dry noodles and is closer to Wuhan than to Zhengzhou, Henan’s capital. If a Wuhan company recruits at a Henan university, routing the posting to students from Xinyang may bring far more response than posting it to an ordinary graduating-class group. But such accurate transmission requires information and value to be disseminated, fed back and verified across the network in order to find the best matching path, cutting costs and benefiting both enterprises and individuals.

The Web 3.0 era is not about going back to the eve of the Internet, when everything depended on personal relationships and offline legwork; it is about making exactly that offline information and value digitized, networked and transferable. Deep-web information mining grows more important; perhaps an enhanced breed of graph database will become the star of the era, and small and mid-sized information and data sites will be able to flourish.
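
The "best matching path" idea reduces, at its simplest, to a weighted graph search. The toy graph below echoes the Xinyang/Wuhan example, with edge weights standing in for "information friction"; a production system would use a graph database and far richer affinity signals.

```python
import heapq

# Lower weight = information travels better along that relationship.
graph = {
    "Wuhan company":        {"Xinyang alumni group": 1, "generic class group": 5},
    "Xinyang alumni group": {"graduating students": 1},
    "generic class group":  {"graduating students": 4},
}

def best_path(graph, start, goal):
    """Dijkstra's algorithm: the lowest-friction route for a message."""
    heap, seen = [(0, start, [start])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))

print(best_path(graph, "Wuhan company", "graduating students"))
# (2, ['Wuhan company', 'Xinyang alumni group', 'graduating students'])
```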

The impact of Web 3.0 on non-Internet companies

In the age before computers, currency was the best “digitization” tool: people used money to measure the economic value of individuals’ and enterprises’ participation in social activity.

Among the cases I have studied, Shell is a company that successfully digitized a very traditional industry with a low-frequency business. A key reason for its success was defining and computing the value of the different organizations and links inside a business, with clear boundaries, division of labor and specialization, so that internal and external collaboration became marketized and monetizable, ultimately enabling large-scale business expansion and a platform ecology.

Once the technology and rules of digital currency mature, it can be used not only for an enterprise’s external business settlement but also for internal value settlement; in the future, every large enterprise may have its own digital currency system. Establishing a value system for business, organization and process inside the enterprise early, and continually benchmarking it against external suppliers and service systems, increases outside parties’ certainty about the value of the enterprise’s various resources.
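
Internal value settlement can be pictured as nothing more exotic than a shared double-entry ledger between departments. A minimal sketch follows; the department names and amounts are invented, and a real corporate "digital currency" would add signatures, auditing, and reconciliation against external market prices.

```python
from collections import defaultdict

class InternalLedger:
    """Double-entry ledger for settling value between internal units."""

    def __init__(self):
        self.balances = defaultdict(float)
        self.log = []

    def transfer(self, payer, payee, amount, memo=""):
        self.balances[payer] -= amount
        self.balances[payee] += amount
        self.log.append((payer, payee, amount, memo))

ledger = InternalLedger()
ledger.transfer("sales", "data_team", 120.0, "lead-scoring model, Q3")
ledger.transfer("sales", "logistics", 80.0, "expedited delivery support")
print(dict(ledger.balances))  # every internal service now has a visible price
```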

The combination of value and digitization breaks the boundaries between traditional departments and companies, and clear rules help companies expand at scale toward a healthy platform ecology. Enterprises can experiment with Web 3.0 ideas to build industry alliances or upstream-downstream cooperation systems in the supply chain: opening data to one another, digitizing cross-enterprise business, carrying the ideas of pervasive digitization and peer-to-peer value exchange through to the end, and exploring Web3.0 best practices amid cooperative competition. A contract economy and digitization do not necessarily require blockchain; electronic contracts can satisfy most needs.

Enterprise data management is relatively mature. In high-frequency, standardized scenarios such as customer service, AI assistants can be introduced and iterated until they become virtual employees of the enterprise. For low-frequency, complex scenarios, some companies have introduced AI bots into departmental chat groups to learn continuously from employees’ questions and answers, picking up the business terms and logic peculiar to the company or department. For data gaps created by commercial software or Internet platforms, RPA tools with AI capabilities can also be used to make business processes more intelligent.

Not every enterprise can have the massive data of an Internet company. While marketing-side data and traffic are controlled by the platforms, ordinary enterprises can focus on their own strengths, such as manufacturing, supply-chain systems and dealer management, going deep on the supply side and studying the deep relationship between numbers and value at every link; there is no need to worry excessively. Blindly abandoning the real for the virtual is unwise: an enterprise cannot exchange raw data for value. The purpose of accumulating data is to transact at larger scale, lower cost and higher efficiency, and transactions cannot be separated from products and services that empower real entities. Social production, manufacturing and consumption embody a great deal of wisdom and experience, some of which has been iterated into best practices and first principles whose formulas guide business more effectively than data methodology. If Web 3.0 is truly realized, non-Internet companies will no longer be disadvantaged in marketing and traffic, and the competition will then be about supply-side strength.

The relationship between enterprises and employees is also worth exploring under Web3.0. In most cases the enterprise holds the advantage over the employee because it has more information and data. The idea behind the Decentralized Autonomous Organization (DAO) was articulated early by the American writer Ori Brafman in “The Starfish and the Spider”, where he likens centralized organizations to spiders and distributed organizations to starfish.

Centralized organizations will not disappear, and businesses with complex linkages are hard to run in a decentralized way. Organization is one of humanity’s most powerful weapons: only with organization can a vast, complex industrial system be built and great feats such as the Moon landing be achieved. Large enterprises are unlikely to run complex production as distributed self-organization, but for new businesses they can explore DAO ideas with amoeba-style agile teams, using peer-to-peer value exchange to evaluate the contributions of employees and corporate resources at each link of the innovative business, and incubating new business and organizational forms accordingly.

Where no sensitive corporate data is involved, enterprises can also explore the employer-employee relationship through data governance. A company may be where an employee spends the most time: what data should belong to the employee? Does an employee have the complete picture when looking back on past experience? A virtual portrait of an individual is incomplete without data from work.

The difficulty of enterprise digitization is that organizational needs are abstract and changing; organizations need long-term exploration and polishing to form their own methodology and systems.

How Web2.0-era sharing platforms differ from Web3.0

The Web 2.0 era also has many platforms and ecosystems that share profits: e-commerce platforms host legions of small and medium sellers alongside self-operated business; every ride-hailing driver is an independent “partner” free to move between platforms; creators on content platforms can earn platform rewards. Do such platforms already belong to the Web3.0 era?

These platforms achieve a degree of openness in value sharing, but most are not open or transparent about their profit-sharing models and algorithms; nor do they take the participants’ perspective, and creators face high costs to switch platforms. On these points, open source could carry the sharing platform genuinely into the Web3.0 era.

Some radical companies may achieve transparent disclosure and win higher trust from customers, employees and investors by going completely open: open-sourcing code, opening business models, financial data and non-sensitive business data. GitLab, for example, an open-source company, has published its employee handbook and management methods on the Internet so that remote employees around the world can integrate better.

Personally, I look forward to an open-source community organized along Web3.0 lines. The operation and data of an open-source community are relatively open and transparent; if the contributions of the platform, the code contributors, the community participants and the early users can all be reflected in the project, continuously balancing the contribution value of past, present and future participants and eventually returning the benefits to all contributors, that would be remarkable. Perhaps a landmark event for Web 3.0 would be a successful project built this way being listed on a traditional exchange and gaining public recognition.

Will Web 3.0 be a technological step backwards?

In the 1980s only the giants had mainframes; individuals and ordinary businesses could hardly afford them. The move from mainframes to the PC era was seen by many as a technological step backwards: mainframes and minicomputers of the day outperformed personal computers many times over on a variety of metrics. But had the era stayed with the mainframe, the rise of the Internet would have been impossible. By the late Web 1.0 era, cloud computing built on distributed clusters had surpassed the traditional mainframe in computing power.

Forty years later, only the big Internet platforms hold massive data; individuals and ordinary enterprises are mere suppliers of it and cannot fully enjoy the dividends of the data age. Today anyone can buy a computer and a phone, but hardware does not equal software, software does not equal data, data does not equal information, and information does not equal value. The digitization of many individuals and ordinary enterprises remains closed and fragmented.

Regulations like the GDPR will inevitably constrain the technical architectures of Internet platforms, and the difficulty of collecting, processing, applying and archiving data will rise markedly. The distributed systems Web3.0 requires may at first be less efficient than the centralized platforms of Web2.0. And what Web3.0 needs is not merely a distributed computing system, not merely a cloud of tens of thousands of nodes or a large distributed database, but a complex system combining countless individuals and organizations: distribution, privacy, open source, trust and value connection together.

Beyond the blockchain technology stack itself, digital currency, electronic contracts, privacy computing, federated identity management, deep-web information mining, personal data stewards, AI assistants, applications for ordinary people, digital thinking among all employees, and symbiotic thinking across alliances and supply chains are all important technical and cultural foundations for building the Web3.0 system.

Perhaps for a while the technical level of Web3.0 will not surpass that of the Web2.0 era. But through building Web3.0, individuals can become deeply digitized, laying a comprehensive data foundation for the arrival of Web4.0. Without a basis of data equality and peer-to-peer value exchange, the Internet platforms would monopolize everything in the Web4.0 and Web5.0 eras, with unimaginable side effects.

Extended thinking

* In the era of Web3.0, how do we avoid being fooled by all these concepts?

Web4.0: AI + Brain-Computer Interface, the Interaction of Consciousness

AI’s development: from quantitative change to qualitative change

The world’s first general-purpose computer, ENIAC, was born at the University of Pennsylvania in 1946, and soon after, the contest between AI and the human brain began. Alan Mathison Turing, pioneer of computer science and cryptography, predicted in his 1950 paper “Computing Machinery and Intelligence” the possibility of creating intelligent machines and proposed the famous Turing test: a machine is said to be intelligent if it can converse with a human without being identified as a machine. The Turing test was the first serious proposal in the philosophy of artificial intelligence.

Students who studied AI before 2006 may remember that no matter the algorithm, general scenarios were out of reach. Even license-plate recognition and face recognition, which seem simple now, depended on an algorithm engineer’s skill at tuning parameters: a system that worked in one specific scene could fail to meet requirements in another, similar one.

After Geoffrey Hinton, a professor at the University of Toronto, proposed the new idea of deep learning in 2006, artificial intelligence entered the fast lane. With the Internet supplying abundant big-data resources and GPUs supplying improved hardware performance, AI finally saw a qualitative change in 2016. In March 2016, Google’s AlphaGo defeated the Go world champion and professional 9-dan Lee Sedol 4:1 in Seoul; in May 2017, the evolved AlphaGo beat the world number one, China’s Ke Jie, 3:0 in the “Human-Machine War 2.0”.

Today, most people still find AI rather dumb: whether talking to a smart speaker at home or an e-commerce customer-service bot, they find that AI does not really understand them and cannot answer many personalized questions.

The underlying reason is that today’s big data accumulates from the perspective of the Internet platform, not from the perspective of the individual user.

Even though the platforms hold vast data, it is fragmented and incomplete along the individual dimension, and privacy concerns make it impossible for individuals to hand their complete data to the platforms anyway. But imagine a personal data steward holding the digital record of a whole life: video of one’s days, every stimulus, feedback and action, every reading and note, every conversation and thought. Such data would be enough to train an AI to understand a person.

If Web3.0 is truly realized, personal digitization will advance by leaps and bounds. As the technology matures, the threshold of “personal digitization” will drop sharply, more and more complete personal digital portraits will be recorded, and virtual digital humans will understand personal needs better and better.

The brain and AI work similarly

As scientists and engineers raise computers’ AI level, another line of research has also advanced greatly: the study of the brain’s working mechanism. I recently read a very interesting article: “Entropy, Free Energy, Symmetry and Dynamics in the Brain”.

The human brain is, to an extent, a Bayesian machine: it maintains an internal model that continuously predicts and judges the future, compares those predictions against sensory input, and uses the feedback to verify and update the model. From this perspective, today’s AI training regime closely resembles what the brain does. Take Texas Hold’em: a human decides the next bet by observing his own hole cards, the community cards, the opponent’s betting behavior and past history, combined with the opponent’s expressions and actions at the table.
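
That predict-compare-update loop can be made concrete with the simplest possible Bayesian filter. The toy sketch below tracks one hidden quantity; it illustrates the loop the paragraph describes, not an actual model of the cortex, and all the numbers are arbitrary.

```python
# A one-dimensional Bayesian (Kalman-style) predict/update loop.
belief_mean, belief_var = 0.0, 1.0    # the internal model of a hidden quantity
process_var, sensor_var = 0.1, 0.5    # how fast the world drifts / sensor noise

for observation in [0.9, 1.1, 1.0, 1.2]:
    # Predict: the internal model extrapolates (here, a random-walk world).
    belief_var += process_var
    # Compare with sensory input and update, weighting by relative confidence.
    gain = belief_var / (belief_var + sensor_var)
    belief_mean += gain * (observation - belief_mean)
    belief_var *= 1 - gain
    print(f"belief={belief_mean:.3f}  uncertainty={belief_var:.3f}")
```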

Texas Hold’em has long been one of the hardest problems in artificial intelligence because of the hidden information in poker: you know neither your opponent’s hand nor how your opponent reads your range, and winning requires bluffs and a repertoire of other strategies. Unlike perfect-information games such as chess and Go, the problems Texas Hold’em poses resemble real scenarios in human life and work, which makes it one of the most interesting fields for AI scientists.

Around 2015 the concept of the Texas Hold’em Solver appeared. Compared with fully automatic AI like AlphaGo/AlphaZero in Go, a Solver is more like an off-table calculator, estimating the opponent’s hand range and the range the opponent assigns to you. With this aid, humans can offload computation and put more energy into the final decision. Professional-grade Solvers, however, involve so much computation that real-time calculation on a phone is difficult.

Fully automatic Texas Hold’em AI in the AlphaGo mold has also achieved great results: Libratus excelled at heads-up play, Pluribus performed well in six-player games, and DeepStack and Poker-CNN, built on deep learning, also did well.

Texas Hold’em has its particularities: the game is full of hidden information, players are affected by emotion and stamina, and the run of the cards injects considerable randomness. Algorithms cannot guarantee a 100% win rate, and AI may never completely crush the best human players. But if an AI could record a human player’s entire history, including body-language data, then with enough training it should eventually simulate that player’s style. The imitation may not reproduce the same play on every hand, but over many rounds it should stay statistically consistent with the player it imitates.
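
At its core, that claim is supervised learning on logged (situation, action) pairs. A hedged sketch with scikit-learn follows; the three features are invented placeholders for whatever a real poker log would contain, and a serious imitator would need vastly more data and richer state.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical logged situations: [hand_strength, pot_odds, opponent_aggression]
X = [[0.9, 0.3, 0.2],
     [0.2, 0.5, 0.8],
     [0.6, 0.4, 0.5],
     [0.1, 0.6, 0.9]]
y = ["raise", "fold", "call", "fold"]   # what this particular player actually did

style = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The imitator will not match every single hand, but over many rounds its
# decision distribution should track the player's, as argued above.
print(style.predict([[0.7, 0.35, 0.4]]))
```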

Moreover, the human brain processes information in roughly 100ms. If an AI in a portable computing device can respond in the same time or less, it can be said to simulate the brain’s working rhythm well; after all, the brain is not an infinite container of information. AI imitation of a human’s Texas Hold’em style should be achievable soon.

AI reads human consciousness through brain-computer interface

Beyond input and feedback, studying the brain also requires in-depth study of the organ itself. The brain-computer interface is a very practical technology; at present, most brain-computer interfaces only read information from the brain.

As early as 1875, the young British physiologist Richard Caton recorded EEG activity from animal brains, including monkeys, and published his findings on the electrical phenomena of the brain’s grey matter. In 1924 the German psychiatrist Hans Berger recorded the brain waves of a human brain for the first time, and the human EEG was born.

When I was doing brain-computer interface research 20 years ago, EEG acquisition equipment was not very sensitive; to improve signal quality, experimenters might even shave their heads and apply conductive gel. A typical experiment: the subject dons an electrode cap, and EEG signals plus algorithms steer a cursor on the screen into a designated area. In most public results of that time, accuracy was only about 75%, in which case a single character entry could take up to a minute.
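
For readers curious what that era's decoding machinery looked like, here is a schematic sketch: crude feature averaging over a response window followed by a linear classifier, roughly the level of method behind those ~75% figures. The data is synthetic and the shapes are invented; this is an illustration, not a reproduction of any specific study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG: 200 trials x 8 electrodes x 64 samples,
# labeled by which of two screen targets the subject attended.
X_raw = rng.normal(size=(200, 8, 64))
y = rng.integers(0, 2, size=200)
X_raw[y == 1, :, 20:30] += 0.4          # inject a weak evoked response

# Feature extraction: mean amplitude per electrode in the response window.
X = X_raw[:, :, 20:30].mean(axis=2)

clf = LinearDiscriminantAnalysis().fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```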

In the past 10 years, the field of brain-computer interfaces has developed by leaps and bounds.

In April 2021, the brain-computer interface company Neuralink released its “Monkey MindPong” demo video of a monkey effortlessly playing a computer game “by mind”. The system is first calibrated with a normally connected joystick, using the neural signal output and algorithms to accurately model the wired joystick’s signal; after calibration, even with the joystick disconnected, the monkey can complete the game on neural output alone. In the demo, the joystick’s connection to the display has been severed. Neuralink’s brain-computer interface is invasive, however, requiring a chip to be surgically implanted in the monkey’s brain, with brain signals read out over a USB-C interface.

In May 2021, teams from Stanford University, the Howard Hughes Medical Institute (HHMI) and Brown University used brain-computer interface technology to convert a paralyzed patient’s “handwriting” in the brain into words on a screen, publishing “High-performance brain-to-text communication via handwriting” in the journal Nature. They combined AI software with a brain-computer interface device to decode imagined handwriting from neural activity in the motor cortex, using a recurrent neural network (RNN) decoder to translate the handwriting into text in real time, rapidly turning the patient’s thoughts about handwriting into text on a computer screen. The subject could input 90 characters per minute, close to a normal person’s typing speed on a smartphone. This performance is very close to a practical scenario: AI understanding expressions within the human brain.
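
Schematically, the decoding setup is a recurrent network mapping a sequence of motor-cortex feature vectors to per-timestep character probabilities. The runnable minimum below uses PyTorch; the channel count, hidden size and alphabet size are invented, and the paper's actual RNN is trained with many refinements (data augmentation, language-model constraints) omitted here.

```python
import torch
import torch.nn as nn

class HandwritingDecoder(nn.Module):
    """RNN: neural-activity features in, per-timestep character logits out."""
    def __init__(self, n_channels=192, hidden=128, n_chars=31):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_chars)

    def forward(self, x):          # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.head(h)        # (batch, time, n_chars)

decoder = HandwritingDecoder()
neural = torch.randn(1, 100, 192)  # 100 timesteps of binned firing rates
logits = decoder(neural)
chars = logits.argmax(dim=-1)      # greedy per-timestep readout
print(chars.shape)                 # torch.Size([1, 100])
```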

Thoughts flash constantly through the brain, and not every thought gets verbal expression or physical follow-through: think of a player’s inner struggle and deliberation before committing a bet in Texas Hold’em. These hidden-layer activities of consciousness are also highly valuable for training AI. Humans are constrained by society and environment; many ideas can never be acted on, and especially in major decisions, not everything can be determined and carried out; a person cannot make many major decisions in a lifetime. If only finally expressed language and actually executed actions serve as AI training sources, the sample is too small, and the most essential parts of human nature are missed, leaving AI unable to simulate human decision-making at critical moments.

3 directions for AI to simulate the brain

1. AI recognizes abstract but simple concepts, such as images and sounds, simulating individual functions of the brain.

2. Through continuous prediction-feedback testing and complex game scenarios, using comprehensive personal data, AI fits human responses to all kinds of inputs.

3. Through brain-computer interface research, on one hand mine the brain’s hidden working mechanisms and the thoughts that never get expressed, shaping a more complete AI; on the other, give humans a good, simple mode of human-computer interaction for expressing their consciousness.

In the future, people may wear portable EEG devices to train their own AI assistants. Gaming and driving are both ideal scenarios for iteratively optimizing a personal AI assistant. When the assistant knows you well enough, you can replicate multiple AI assistants to handle different jobs, and everyone should be able to spend more time on family and leisure. Only then will the AI assistant truly deserve to be called qualified.

In the Web 4.0 era, the replication of consciousness, the interaction of consciousness in virtual space, and the computer’s reading of consciousness from the brain can all be realized.

Extended thinking

* In the era of Web4.0, how do we protect our own consciousness from being exploited through legal means?

Web5.0: The Era of Human-Machine Integration

Perhaps everyone thinks the formulation of Web5.0 is premature, but many of its technologies are already sprouting. Borrowing the standard of the Turing test, human-machine integration is achieved when, whether over a network or in person, one cannot distinguish machine from human.

First, let’s start with humanoid robots.

The most-watched player in humanoid robotics is Boston Dynamics. In 2009 a prototype of its bipedal robot PETMAN was unveiled, still dragging a cable as it paced its track. In 2013 the Atlas prototype, now humanoid in shape, appeared; it could walk over gravel piles, stand on one leg, and withstand the blow of a heavy pendulum ball. Two months after that video was released, Boston Dynamics was acquired by Google (now Alphabet), and in 2017 it was acquired by Japan’s SoftBank Group. The changes of ownership did not slow Atlas’s rapid growth: its movements grew smoother, and over the following years it learned to climb steps, do backflips, run, tumble, do handstands and dance. In June 2021, Hyundai Motor Group and SoftBank announced that Hyundai had completed the acquisition of an 80% stake in Boston Dynamics.

The other player is Tesla. After announcing its humanoid robot (Tesla Bot) plan in August 2021, Musk said on Twitter in June of this year that a prototype would be unveiled at Tesla AI Day on September 30. The robot, named Optimus, is 1.72 meters tall and weighs 56.6 kilograms, similar to a human. According to Tesla’s plans, Optimus could enter production as early as 2023. In Musk’s view, building a humanoid robot is feasible from the standpoint of sensors and actuators; the two missing elements are sufficient intelligence and production at scale.

The development and accumulation of Web4.0 can supply exactly that intelligence. If an AI robot can produce the same actions and reactions as a human to every external stimulus, its simulation of humans has succeeded.

However, a robot’s structure is certainly different from a human’s. Robots made of machinery and other materials cannot simulate human flesh and blood, nor provide the authentic feelings humans need in communication. Hearing will probably be the first sense AI robots simulate successfully, and vision may be partially simulated, but touch, smell and taste are hard. It is precisely these gaps that keep an AI robot from a hearty laugh at the Texas Hold’em table, or a nervous micro-expression after a bluff; such signals and feelings matter just as much to the humans nearby.

The brain-computer interface of the Web4.0 stage only reads consciousness from the brain; in the Web5.0 stage it must also write consciousness to the brain, truly realizing the interconnection of consciousness. The human brain is itself an excellent computing and simulation system, a highly efficient Metaverse, private and full of possibility. In dreams, humans recombine scenes and characters they have seen, heard or imagined into new scenes, sometimes obeying the laws of reality, sometimes transcending them. With only a small amount of input stimulus and guiding signal, the brain can simulate rich scenarios; scenes out of “Inception” may no longer be science fiction.

Feedback that robots find hard to simulate, such as touch, smell and taste, could be generated as similar stimuli in the brain through brain-computer interface input. If a human cannot tell whether the source of a stimulus is another human or an interconnected machine AI, the “Turing test” of human-machine fusion should be considered passed.

Extended thinking

* In the era of Web5.0, will the flesh be replaced? 

Post-Web5.0 Era

Perhaps the “cold-blooded” blockchain system will shine in the era of human-machine integration, with energy and computing power as the hard currency of the age. Can humanity enter Web 3.0 and break the monopoly of the Internet giants? Can humans coexist with machines in the Web 5.0 era? Technology advances faster and faster, but where is human consciousness headed? Where is the human body headed?

