30,000 words to read about the Rust industry

Rust has been voted the most loved language in the Stack Overflow Developer Survey for five years in a row.

Author: Zhang Handong

Table of Contents

Before the text

Understanding the Rust Language

High Performance Like C / Cpp



Rust and Open Source

Shortcomings of the Rust Language

Rust Ecosystem Base Library and Toolchain

Rust Industry Application Inventory

Data Services

Cloud Native

Operating Systems

Tools and Software

Machine Learning


Client Development

Blockchain / Digital Currency

Other areas where Rust is revolutionizing

Inventory of companies using Rust in production environments




Author Introduction

Before the text
Rust is a general-purpose, system-level programming language known for being GC-free, memory-safe, concurrency-safe, and high-performance. It began as a personal project of Graydon Hoare in 2008 and was sponsored by Mozilla from 2009; it was first released in 2010 as version 0.1.0 for the Servo engine, and version 1.0 was released on May 15, 2015.

In the six years since that release, as of 2021, Rust has grown steadily and become progressively more mature and stable.

Starting in 2016, Rust has been voted the most loved language in the Stack Overflow Developer Survey for five consecutive years, through 2021 [1].


On February 9, 2021, the Rust Foundation was announced. Huawei, AWS, Google, Microsoft, Mozilla, Facebook, and other leading tech giants joined the Rust Foundation as platinum members to work on promoting and developing the Rust language worldwide.

What is it about the Rust language that makes it so interesting to developers and giant companies?

This article attempts to answer this question by looking at two aspects of the Rust language itself and a community survey of Rust applications in the industry. We hope to provide a more comprehensive and intuitive impression of the current applications of Rust in all major areas through these simple but critical data for companies that want to choose Rust.

Note: All the data listed in this article is from publicly available content on the Internet.

Understanding the Rust Language
Programming language design has long been a contradiction between two seemingly irreconcilable desires.

Safe. We want a strong type system to statically rule out large classes of errors. We want automatic memory management. We want data encapsulation, so that we can enforce invariants on an object's private variables and ensure they cannot be corrupted by untrusted code.

Control. At least for system programs like web browsers, operating systems, or game engines, where performance or resource constraints are a serious concern, we want to understand the byte-level representation of data. We want to optimize our programs' use of time and space with low-level programming techniques. We want to work on bare metal when we need to.

However, the traditional view is that you can’t have both: languages like Java give us great safety, but at the expense of control over the underlying layer. As a result, for many system programming applications, the only realistic option is a language like C or C++ that provides fine-grained control over resource management. Gaining such control, however, comes at a high cost. For example, Microsoft recently reported that 70% of the security vulnerabilities they fix are attributable to memory safety violations [2], precisely the issues a strong type system can rule out. Similarly, Mozilla reported that the vast majority of the critical bugs found in Firefox were memory-related [3].

Wouldn’t it be nice if we could somehow have the best of both worlds: secure systems programming while having control over the underlying layers? Hence, the Rust language was born.

The official website describes Rust as: a language that empowers everyone to build reliable and efficient software.

There are three major advantages of the Rust language that are worth noting.

Performance: Rust is incredibly fast and extremely memory efficient. With no runtime or garbage collector, it can power performance-critical services, run on embedded devices, and integrate easily with other languages.

Reliability: Rust’s rich type system and ownership model guarantee memory safety and thread safety, allowing you to eliminate a wide range of errors at compile time.

Productivity: Rust has excellent documentation, a friendly compiler with clear error messages, and integrates best-in-class tools – package managers and build tools, intelligent auto-completion and type-checking support for multiple editors, automatic code formatting, and more.

Rust is low-level enough that it can be optimized like C for maximum performance if necessary.

The higher the level of abstraction, the easier memory management becomes and the richer the available libraries are, so Rust programs can do more with less code; but without control, this convenience can lead to program bloat.

However, Rust programs also optimize very well, sometimes better than C. C suits minimal code written byte-by-byte and pointer-by-pointer, while Rust has the power to efficiently combine multiple functions or even entire libraries together.

But the biggest potential is the ability to parallelize most Rust code fearlessly, even though the risk of parallelizing equivalent C code is very high. In this respect, the Rust language is a more mature language than C.

The Rust language also supports highly concurrent, zero-cost asynchronous programming; Rust is arguably the first system-level language to support asynchronous programming.

High performance comparable to C / Cpp
Rust vs C


Rust vs Cpp


Rust vs Go


The runtime speed and memory usage of programs written in Rust should be about the same as programs written in C, but the general programming style of the two languages is different and it is difficult to generalize their performance.

In summary:

Abstraction is a double-edged sword. The Rust language is more abstract than C, and abstraction can hide code that is not optimally efficient, which means the default form of a piece of Rust code does not always perform best. Unsafe Rust is the escape hatch for maximum performance.

Rust is thread-safe by default, eliminating data races and making multi-threaded concurrent programming far more practical.

Rust is indeed faster than C in some respects. In theory, C can do anything, but in practice C is less abstract, less modern, and less efficient to develop in. Given unlimited time and effort, a developer could make C faster than Rust in these areas.

Because C is a good yardstick for high performance, here are some of the similarities and differences between C and Rust. If you are familiar with C/Cpp, you can also evaluate Cpp against Rust using the same comparison.

Rust and C are both direct hardware abstractions

Both Rust and C are direct abstractions of hardware, and both can be considered a “portable assembler”.

Both Rust and C control the memory layout of data structures, integer sizes, stack and heap memory allocation, indirect addressing of pointers, etc., and are generally translated into understandable machine code with little “magic” inserted by the compiler.

Even though Rust has higher-level constructs than C, such as iterators, traits, and smart pointers, they are designed to be predictably optimized into simple machine code (aka “zero-cost abstraction”).

The memory layout of Rust’s types is simple; for example, the growable string String and the vector Vec are exactly {byte*, capacity, length}. Rust has no notion of move or copy constructors as in Cpp, so passing an object is guaranteed to be no more expensive than passing a pointer or a memcpy.
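As a quick sanity check (a std-only sketch), the three-word layout can be observed with std::mem::size_of:

```rust
use std::mem::size_of;

fn main() {
    // String and Vec<u8> are both {pointer, capacity, length}: three machine words.
    assert_eq!(size_of::<String>(), 3 * size_of::<usize>());
    assert_eq!(size_of::<Vec<u8>>(), 3 * size_of::<usize>());

    // Passing a String moves these three words; no copy constructor runs.
    let s = String::from("hello");
    let t = s; // `s` is moved, not deep-copied
    println!("{t}");
}
```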

Rust’s borrow checking is simply a static analysis of references in the code by the compiler. Lifetime information is erased entirely during compilation, well before machine code is generated.

Instead of traditional exception handling, Rust uses return-value-based error handling. You can still use panic for exceptional behavior, as in Cpp; unwinding can be disabled at compile time (panic = abort), but even then, Rust panics do not mix well with Cpp exceptions or longjmp.
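A minimal sketch of this return-value style, using a hypothetical parse_and_double helper (not from the original text):

```rust
use std::num::ParseIntError;

// Errors are ordinary values, propagated with `?`
// instead of unwinding like a C++ exception.
fn parse_and_double(input: &str) -> Result<i32, ParseIntError> {
    let n: i32 = input.trim().parse()?; // early-returns the Err variant on failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double(" 21 "), Ok(42));
    assert!(parse_and_double("oops").is_err());
}
```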

The same LLVM backend

Rust has good integration with LLVM, so it supports link-time optimization, including ThinLTO, and even inlining across C/C++/Rust language boundaries. There is also support for Profile-Guided Optimization (PGO). Although rustc generates more verbose LLVM IR than clang, the optimizer still handles it well.

C compiles faster under GCC than under LLVM, and people in the Rust community are now working on a GCC front end for Rust.

In theory, because Rust has stricter immutability and aliasing rules than C, it should have better performance optimizations than C, but in practice it doesn’t work out that way. Optimization beyond C is currently a work in progress in LLVM, so Rust still hasn’t reached its full potential.

All allow for manual optimization, with some minor exceptions

Rust code is low-level and predictable enough that you can hand-tune the assembly it compiles down to.

Rust supports SIMD and has good control over inlining and calling conventions.

Rust is similar enough to C that some of the analysis tools for C can often be used with Rust.

In general, if performance is absolutely critical and you need to squeeze out every last bit of it by hand, then optimizing Rust is not much different from optimizing C.

However, there is no particularly good alternative to Rust for some of the more underlying features.

goto. Rust does not provide goto, but you can use a labeled break from a loop instead. In C, goto is commonly used for memory cleanup, which Rust does not need thanks to deterministic destruction. (There is, however, a non-standard computed-goto extension in C that is useful for performance optimization.)
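The labeled-break replacement for goto can be sketched as follows; find, grid, and needle are illustrative names, not from the original text:

```rust
// A C-style "goto out" pattern expressed with Rust's labeled break.
fn find(grid: &[[i32; 3]], needle: i32) -> Option<(usize, usize)> {
    let mut found = None;
    'search: for (r, row) in grid.iter().enumerate() {
        for (c, &cell) in row.iter().enumerate() {
            if cell == needle {
                found = Some((r, c));
                break 'search; // jumps out of both loops, like a goto
            }
        }
    }
    found
}

fn main() {
    let grid = [[1, 2, 3], [4, 5, 6]];
    assert_eq!(find(&grid, 5), Some((1, 1)));
    assert_eq!(find(&grid, 9), None);
}
```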

Stack memory allocation alloca and C99 variable length arrays can save memory space and reduce the number of memory allocations. But these are controversial even in C, so Rust stays away from them.

Some of the overhead of Rust compared to C

Without hand optimization, Rust also has some overhead due to its abstract representation.

Rust lacks implicit type conversions and indexes only with usize, which leads developers to use that type even when a smaller one would do. Indexing with usize is easier to optimize on 64-bit platforms without worrying about undefined behavior, but the extra bits can put more pressure on registers and memory. In C, by contrast, you can choose 32-bit index types.

Strings in Rust always carry a pointer and a length, whereas many functions in C code take only a pointer and no size.

In a loop like for i in 0..len { arr[i] }, performance depends on whether the LLVM optimizer can prove that len matches the array’s length. Sometimes it cannot, and the bounds checks inhibit auto-vectorization.
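A sketch of the two styles (sum_indexed and sum_iter are hypothetical names): the iterator form carries no index, so there is no bounds check for LLVM to elide in the first place.

```rust
// Index-based loop: each arr[i] may carry a bounds check unless LLVM
// can prove that i < arr.len().
fn sum_indexed(arr: &[i32]) -> i32 {
    let mut total = 0;
    for i in 0..arr.len() {
        total += arr[i];
    }
    total
}

// Iterator form: no index, hence no bounds check at all.
fn sum_iter(arr: &[i32]) -> i32 {
    arr.iter().sum()
}

fn main() {
    let data = [3, 1, 4, 1, 5];
    assert_eq!(sum_indexed(&data), sum_iter(&data));
    assert_eq!(sum_iter(&data), 14);
}
```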

C is more liberal and has a lot of “smart” tricks for using memory, but not so much in Rust. However, Rust still gives a lot of control over memory allocation and can do basic things like memory pooling, merging multiple allocations into one, preallocating space, and so on.

If you are not yet comfortable with Rust’s borrow checker, you may find yourself reaching for Clone to avoid references, which adds allocation and copying overhead.

Rust’s standard library I/O is unbuffered, so it needs to be wrapped in a BufWriter. This is why some people find their Rust program no faster than Python: 99% of the time was being spent on unbuffered I/O.
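A minimal sketch of wrapping a writer in BufWriter (write_lines is a hypothetical helper); the underlying writer then sees a few large writes instead of one tiny write per line:

```rust
use std::io::{self, BufWriter, Write};

// Write `n` lines through an in-memory buffer; the wrapped writer is
// flushed and returned at the end.
fn write_lines<W: Write>(inner: W, n: usize) -> io::Result<W> {
    let mut out = BufWriter::new(inner);
    for i in 0..n {
        writeln!(out, "line {i}")?; // goes to the buffer, not straight to the OS
    }
    out.into_inner().map_err(|e| e.into_error()) // flushes the buffer
}

fn main() -> io::Result<()> {
    write_lines(io::stdout().lock(), 5)?;
    Ok(())
}
```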

Executable Size

Every operating system ships some standard C library, around 30MB of code, which C executables get to use “for free”.

A small “Hello World”-level C executable does not actually contain the code to print anything; it merely calls the printf provided by the operating system.

This is not the case with Rust, which bundles its own standard libraries (300KB or more) with the Rust executable. Fortunately, this is a one-time overhead that can be reduced.

For embedded development, you can turn off the standard library and use “no-std” and Rust will generate “bare” code.

On a per-function basis, Rust code is about the same size as C, but with a “generic bloat” problem. Generic functions have optimized versions for each type they use, so it is possible to have 8 versions of the same function, and the cargo-bloat[4] library helps to detect these problems.

It is very easy to pull in dependencies in Rust. As with JS/npm, small, single-purpose packages are now the norm, and they do add up. The cargo tree command is useful for trimming them down.

Some of the ways in which Rust slightly outperforms C

To hide implementation details, C libraries often return opaque pointers to data structures and ensure there is only one copy of each instance of the structure. Rust’s built-in privacy, single-ownership rules, and coding conventions let a library expose its objects without indirection, so the caller can decide whether to put them on the heap or on the stack. Objects on the stack can be optimized aggressively, or even optimized away entirely.

By default, Rust can inline functions from standard libraries, dependencies, and other compilation units.

Rust will reorder structure fields to optimize memory layout.
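The reordering effect can be observed with std::mem::size_of (the struct names here are illustrative): with the default representation the compiler may reorder fields, while #[repr(C)] must keep declaration order and its padding.

```rust
use std::mem::size_of;

// Default (Rust) representation: the compiler may reorder fields to pack them.
#[allow(dead_code)]
struct Auto {
    a: u8,
    b: u32,
    c: u8,
}

// C representation: declaration order is kept, padding included.
#[allow(dead_code)]
#[repr(C)]
struct CLike {
    a: u8,
    b: u32,
    c: u8,
}

fn main() {
    // In practice Auto typically packs into 8 bytes while CLike needs 12,
    // though the exact default layout is not guaranteed by the language.
    assert!(size_of::<Auto>() <= size_of::<CLike>());
    println!("Auto: {}, CLike: {}", size_of::<Auto>(), size_of::<CLike>());
}
```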

Strings carry size information, making length checks fast and allowing substrings to be produced in place.

Similar to C++ templates, generic functions in Rust are monomorphized, generating a copy for each type used, so functions like sort and containers like HashMap are always optimized for the concrete type. In C, the choice is between macros or less efficient functions that work with void* and runtime-variable sizes.

Rust’s iterators can be combined into chains that are optimized together as a single unit. A whole chain of calls thus compiles into one loop, instead of a series of separate passes that each write to the same buffer.
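A minimal sketch of such a chain (even_square_sum is a hypothetical name); the three steps fuse into a single loop with no intermediate collections:

```rust
// The whole chain compiles into one loop over 1..=limit:
// nothing is allocated between the steps.
fn even_square_sum(limit: i32) -> i32 {
    (1..=limit)
        .filter(|n| n % 2 == 0) // keep the even numbers
        .map(|n| n * n)         // square them
        .sum()                  // fold in the same pass
}

fn main() {
    // evens up to 10 are 2, 4, 6, 8, 10; their squares sum to 220
    assert_eq!(even_square_sum(10), 220);
}
```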

Similarly, with the Read and Write interfaces, it is possible to receive some uncached stream data, perform CRC checksums on the stream, then transcode it, compress it, and write it to the network, all in a single call. While it should be possible to do this in C, it will be difficult to do without generics and traits.

The Rust standard library has high-quality containers and optimized data structures built into it, making it easier to use than C.

Rust’s serde ecosystem (with serde_json) includes one of the fastest JSON parsers in the world and is a great experience to use.

Where Rust is clearly superior to C

There are three main points.

Rust eliminates data races, is thread-safe by construction, and unlocks multi-threaded productivity; this is where Rust is clearly superior to languages like C / Cpp.

The Rust language supports asynchronous and highly concurrent programming.

Rust supports safe compile-time computation.

Thread safety

Even in third-party libraries, Rust enforces thread safety for all code and data, even if the authors of that code don’t pay attention to thread safety. Everything follows a specific thread-safety guarantee or does not allow cross-thread usage. When you write code that is not thread-safe, the compiler will point out exactly what is unsafe.

There are already many libraries in the Rust ecosystem such as data parallelism, thread pools, queues, tasks, lock-free data structures, etc. With the help of such components, and the strong safety net of the type system, it is perfectly easy to implement concurrent/parallelized Rust programs. In some cases, it is possible to replace iter with par_iter and it will work as long as it can be compiled! This is not always a linear speedup (Amdahl’s law is brutal), but often a relatively small amount of work can speed up by a factor of 2 to 3.
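With rayon, this is often literally replacing iter with par_iter. As a std-only sketch of the same idea, scoped threads let the borrow checker verify that worker threads share data safely (parallel_sum is a hypothetical helper, not from the original text):

```rust
use std::thread;

// Split a slice into chunks and sum them on separate threads. The borrow
// checker verifies that the scoped threads only share the slice immutably.
fn parallel_sum(data: &[u64], workers: usize) -> u64 {
    let chunk = ((data.len() + workers - 1) / workers).max(1);
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk)
            .map(|part| s.spawn(move || part.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=1000).collect();
    assert_eq!(parallel_sum(&data, 4), 500_500);
}
```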

Extension: Amdahl’s law, a rule of thumb in computer science named after Gene Amdahl, gives the theoretical limit on how much a program can be sped up when only part of it is parallelized.

There is an interesting difference between Rust and C when it comes to documenting thread safety.

Rust has a glossary of terms used to describe specific aspects of thread safety, such as Send and Sync, guards and cells.

For a C library, thread safety must be spelled out in ad-hoc prose, such as: “You can allocate it on one thread and release it on another thread, but you cannot use it from both threads at the same time”.

Rust describes thread safety in terms of data types, so the guarantee generalizes to every function that uses those types.

For C, thread safety involves only a single function and configuration flags.

Rust’s guarantees are provided at compile time, and unconditionally.

For C, it is common to say “this is thread-safe only if the turboblub option is set to 7”.

Asynchronous concurrency

The Rust language supports the async/await asynchronous programming model.

This programming model is based on a concept called Future, known as Promise in JavaScript: a value that has not yet been computed, on which you can express various operations before it resolves to that value. Rust’s implementation goes further than the Future support in many languages, providing features such as combinators and, above all, serving as the foundation for the more ergonomic async/await syntax.

A Future can represent many things, and is particularly useful for representing asynchronous I/O: when you initiate a network request, you immediately get a Future object, and once the request completes it yields whatever value the response contains. It can also represent something like a timeout, which is simply a Future that resolves after a specific amount of time has passed. Even CPU-intensive work that is not I/O can be put onto a thread pool and represented by a Future that resolves when the pool finishes the work.

The problem with Future is that, in most languages, it is represented in a callback-based style: you specify a callback to run once the Future resolves. That is, the Future is responsible for figuring out when it is resolved and then running whatever your callback is. This model has built-in inconveniences and is very hard to use well: every callback you schedule needs its own separate storage, such as boxed objects and heap allocations, and these allocations and dynamic dispatches end up everywhere. That violates the principle of zero-cost abstraction: code written this way is much slower than what you would write by hand, so why would you use it?

The solution in Rust is different. Instead of the Future dispatching a callback, a component called the executor polls the Future, which returns either “Pending” or, once it is resolved, “Ready”. This model has many advantages. One is that cancelling a Future is very easy: you simply stop holding it, whereas with a callback-based approach it is much harder to cancel and stop the dispatch.

It also allows us to establish really clear abstraction boundaries between different parts of the program. Most other Future libraries have event loops, which is how you schedule your Future to perform I/O, but you don’t actually have any control over it.

In Rust, the boundaries between the components are very neat, with the executor taking care of scheduling your Future, the reactor handling all the I/O, and then your actual code. So the end user can decide for themselves what executor to use and what reactor they want to use, thus gaining more control, which is really important in a system programming language.

And the most important practical advantage of this model is that it lets us implement Futures as state machines in a truly zero-cost way. When your Future code is compiled into native code, it behaves like a state machine: each await point of I/O corresponds to a variant of that state machine, and each variant holds the state needed to resume execution.
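The poll-based model described above can be driven by hand with just the standard library. CountDown, noop_waker, and drive are illustrative names standing in for what a real executor does:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A toy Future that reports Pending twice before resolving -
// a stand-in for an I/O pause point.
struct CountDown {
    remaining: u32,
}

impl Future for CountDown {
    type Output = &'static str;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        if this.remaining == 0 {
            Poll::Ready("done")
        } else {
            this.remaining -= 1;
            Poll::Pending // a real executor would re-poll when woken
        }
    }
}

// Minimal no-op waker so we can play the executor ourselves.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Poll the future to completion, counting how many polls it took.
fn drive<F: Future + Unpin>(mut fut: F) -> (F::Output, u32) {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut polls = 0;
    loop {
        polls += 1;
        if let Poll::Ready(out) = Pin::new(&mut fut).poll(&mut cx) {
            return (out, polls);
        }
    }
}

fn main() {
    let (out, polls) = drive(CountDown { remaining: 2 });
    assert_eq!((out, polls), ("done", 3));
    println!("{out} after {polls} polls");
}
```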

The really useful thing about this Future abstraction is that we can build other APIs on top of it. It is possible to build state machines by applying these combinator methods to Future, and they work in a similar way to adapters for Iterators (e.g. filter, map). But there are some drawbacks to this approach, especially things like nested callbacks, which are very poorly readable. That’s why there is a need to implement async / await asynchronous syntax.

There is already a mature tokio[5] runtime ecosystem in the Rust ecosystem that supports epoll and other asynchronous I/O. If you want to use io_uring, you can use Glommio[6], or wait for tokio to support io_uring. You can even build your own runtime using the async_executor[7] and async-io[8] provided by the smol runtime.

Compile-Time Computation

Rust can support compile-time constant evaluations similar to Cpp. This is clearly superior to C.

It is not yet as powerful as Cpp’s, but it is under active development.

Why is Rust so careful about compile-time computation? Because Rust’s compile-time evaluation is deliberately not as unrestricted, and thus not as easy to abuse, as Cpp’s.
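A minimal sketch of compile-time evaluation with a const fn (fib is an illustrative example):

```rust
// A const fn is evaluated at compile time when used in a const context;
// loops and conditionals are allowed, but the const evaluator rejects
// undefined behavior instead of silently miscompiling.
const fn fib(n: u64) -> u64 {
    let mut a = 0u64;
    let mut b = 1u64;
    let mut i = 0;
    while i < n {
        let next = a + b;
        a = b;
        b = next;
        i += 1;
    }
    a
}

// Computed by the compiler; the binary just contains the constant 55.
const FIB_10: u64 = fib(10);

fn main() {
    assert_eq!(FIB_10, 55);
    // The same function still works at runtime.
    assert_eq!(fib(20), 6765);
}
```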

In June 2020, five academics from three universities presented a study at the ACM SIGPLAN International Conference (PLDI’20) that provides a comprehensive survey of security flaws in open source projects that have used the Rust language in recent years. The study investigated five software systems developed using the Rust language, five widely used Rust libraries, and two vulnerability databases. In total, the survey covered 850 uses of unsafe code, 70 memory security flaws, and 100 thread security flaws.


In their investigation, the researchers not only looked at all bugs reported in the vulnerability databases and publicly reported bugs in the software, but also examined the commit records of all the open source code repositories. Through manual analysis, they classified the bugs fixed by the commits into the corresponding memory safety/thread safety categories. All the investigated issues are organized in a public Git repository: https://github.com/system-pclub/rust-study [9]

Key findings:

The Rust language’s safe code is very effective at checking for spatial and temporal memory safety issues, and all of the memory safety issues that appear in stable versions are related to unsafe code.

Although memory safety issues are all related to unsafe code, a large number of issues are also related to safe code. Some problems even stem from coding errors in the safe code rather than the unsafe code.

Thread safety problems, whether blocking or non-blocking, can occur in safe code, even if the code is fully compliant with the rules of the Rust language.

A large number of problems are caused by developers who do not properly understand the Rust language’s lifetime rules.

There is a need to build new defect detection tools for typical problems in the Rust language.

So how is the security of Rust ensured behind this survey report, and why is Unsafe Rust Unsafe?

Ownership: The Rust Language Memory Security Mechanism

Rust’s design draws heavily on the best of academic research on safe systems programming. In particular, the most distinctive feature of Rust’s design, compared to other mainstream languages, is the adoption of an ownership type system (often referred to in the academic literature as an affine or substructural type system [10]).

The ownership mechanism is the semantics and model of safe programming expressed by the Rust language with the help of a type system that carries its idea of “memory safety”.

The memory-unsafety problems that the ownership mechanism aims to address include:

Dereferencing null pointers.

Use of uninitialized memory.

Use after free, i.e., use of dangling pointers.

Buffer overflows, such as array out-of-bounds access.

Freeing already-freed or never-allocated pointers, i.e., double free.

Note that memory leaks are not considered a memory safety problem, so Rust does not prevent memory leaks either.

To ensure memory safety, the Rust language establishes a strict model for safe memory management:

The ownership system. Every memory allocation has an owner with exclusive ownership of it; only when that owner is destroyed is the corresponding memory released.

Borrowing and lifetimes. Every variable has its own lifetime, and once the lifetime ends, the variable is automatically freed. For borrows, annotating lifetime parameters for the compiler to check prevents dangling pointers, that is, use after free.

The ownership system also includes the RAII mechanism, borrowed from modern C++, which is the cornerstone of Rust’s GC-free but safe memory management.

Once the safe memory management model is established, it is expressed through the type system. Rust’s type system borrows from Haskell’s and has the following features:

No null pointers

Immutable by default

Higher-order functions

Algebraic data types

Pattern matching

Generics

Traits and associated types

Local type inference

For memory safety, Rust also has the following unique features.

Affine types, used to express the move semantics of Rust ownership.

Borrowing and lifetimes.

With the power of the type system, the Rust compiler can check types at compile time to see if they satisfy the safe memory model, detecting memory insecurity at compile time and effectively preventing undefined behavior from occurring.

Memory-safety bugs and concurrency-safety bugs arise for the same intrinsic reason: improper access to memory. Rust addresses concurrency safety with the same powerful, ownership-equipped type system: through static analysis, the Rust compiler checks multi-threaded concurrent code for all data races at compile time.

Unsafe Rust: Delineating Security Boundaries

In order to integrate well with the existing ecosystem, Rust supports a very convenient and zero-cost FFI mechanism, is C-ABI compatible, and divides the Rust language into Safe Rust and Unsafe Rust at the language architecture level.

Unsafe Rust deals exclusively with external systems, such as operating system kernels. The reason for this division is that the Rust compiler has no way to check or track the safety of foreign language interfaces, so there it is up to the developer to ensure safety.

The ultimate goal of Rust is not to completely eliminate those danger points, because at some point we need to be able to access memory and other resources. In fact, the goal of Rust is to abstract out all the unsafe elements. When thinking about security, you need to think about the “attack surface”, or the parts of the program that can be interacted with. Things like parsers are a big attack surface because:

They are usually accessible to attackers.

The data provided by the attacker can directly affect the complex logic that parsing usually requires.

You can take this further by breaking down the traditional attack surface into an “attack surface” (the part of the program code that can be directly affected) and a “security layer”, which is the code that the attack surface depends on, but is inaccessible and potentially buggy. In C, they are the same: arrays in C are not abstract at all, so if you read a variable number of items, you need to make sure that all the invariants remain unchanged, because this is operating in the unsafe layer, where errors can occur.

So, Rust provides the unsafe keyword and unsafe blocks to explicitly separate safe code from unsafe code that accesses external interfaces, which also makes errors easier to debug. Safe Rust means the developer trusts the compiler to guarantee safety at compile time; Unsafe Rust means the compiler trusts the developer to guarantee safety.

Where there are people, there are bugs. The Rust language has been carefully designed to give the compiler control over what the machine can check, and the developer control over what it cannot.

Safe Rust ensures that the compiler maximizes memory safety and prevents undefined behavior at compile time.

Unsafe Rust alerts developers that the code being written may cause undefined behavior, so proceed with care! Human and compiler share the same “safety model”, trust each other, and divide the work so as to minimize the chance of human error.

Unsafe Rust marks the safety boundary of Rust. The world is unsafe by nature; you cannot avoid that. It is true that Unsafe Rust relies on people to keep it safe, just like C/C++, but the portion that must rely on people is far smaller.

It also gives developers an Unsafe boundary, which is actually a security boundary. It takes the minefield in your code and explicitly marks it out. If you review the team code, you can find the problem faster. That in itself is a kind of safety. In contrast, in C++, every line of code you write is Unsafe because it doesn’t have such obvious boundaries (Unsafe blocks) as Rust.

Here are my five simple rules for using Unsafe, to help you make the trade-off.

Use Safe Rust if you can.

Use Unsafe Rust when it is necessary for performance.

When using Unsafe Rust, make sure not to produce UB, and try to determine its safety boundary and abstract it into a safe method.

If it cannot be abstracted as Safe, it needs to be marked as Unsafe with documentation of the conditions that generate UB.

Unsafe code should receive focused review.

So, Unsafe can also cause memory safety or logic bugs if it is not used properly. Therefore, it is crucial to learn how to abstract Unsafe Rust for safety.

In addition, the Rust community has a Security Working Group, which provides tools such as cargo-audit [11] and maintains the RustSec advisory database [12] of security issues found in the Rust ecosystem. Security issues in a Rust project’s dependencies can thus be checked easily.

Programming language productivity can be assessed in roughly three ways:

Learning curve.

Language engineering capabilities.

Domain ecology.

Learning curve

How steep the learning curve is varies with individual background. Here is what to keep in mind when learning Rust, for different backgrounds.

Developers starting from zero: master basic computer architecture; understand how the Rust language abstracts over the hardware and OS layers; understand Rust’s core concepts and abstraction patterns; pick an applicable domain of Rust for hands-on training; and improve proficiency and understanding of Rust through practice while acquiring the domain knowledge.

C background: since C developers are less used to high-level language abstractions, focus on understanding Rust’s ownership mechanism, including ownership semantics, lifetimes, and borrow checking; Rust’s abstraction patterns, mainly types and traits; and Rust’s own OOP and functional language features.

C++ basics: C++ developers have a good understanding of the ownership of the Rust language and focus mainly on Rust’s abstract patterns and functional language features.

Java/Python/Ruby: Focus on understanding the Rust ownership mechanism, abstract patterns, and functional programming language features.

Go base: Go language developers can easily understand Rust’s type and trait abstraction patterns, but Go is also a GC language, so the ownership mechanism and functional language features are their learning focus.

Haskell base: Haskell developers have a good understanding of the functional features of Rust language, and focus on ownership mechanism and OOP language features.

Therefore, for developers with some foundation, the key concepts to master when learning the Rust language are:

1. The Rust ownership mechanism, including ownership semantics, lifetimes, and borrow checking.

The ownership mechanism is the core feature of the Rust language; it guarantees memory safety without a garbage collector. For developers used to GC, understanding ownership is the most critical step. Keep these three points in mind.

Every value in Rust has a variable called its owner.

A value has one owner and one owner only.

When the owner goes out of scope, the value is dropped. This involves concepts such as lifetimes and borrow checking, and is a relatively hard nut to crack.

2. Rust's abstraction patterns, mainly types and traits. Traits are borrowed from Haskell's typeclasses: a trait is an abstraction over type behavior, roughly comparable to interfaces in other languages, telling the compiler what functionality a type must provide. Trait implementations must obey coherence: conflicting implementations cannot be defined.

3. OOP language features. Familiarity with the four common pillars of object-oriented programming (objects, encapsulation, inheritance, and polymorphism) helps in understanding Rust features such as impl, pub, and trait.

4. Functional language features. The design of the Rust language is heavily influenced by functional programming, and people who are put off by math may be deterred at first sight of functional features, since the hallmark of functional languages is expressing computation as a series of nested function calls. In Rust, mastering closures and iterators is an important part of writing high-performance code in a functional style.

Language Engineering Capabilities

Rust is ready to develop industrial-grade products.

To ensure safety, Rust introduces a robust type system and an ownership system that guarantee not only memory safety but also concurrency safety, without sacrificing performance.

To ensure support for hard real-time systems, Rust borrows deterministic destruction, RAII, and smart pointers from C++ to manage memory automatically and deterministically, avoiding the need for a GC and hence the "stop the world" problem. These ideas come from C++, but are much more concise in Rust.

To ensure robustness, Rust revisits the error-handling mechanism. In everyday development there are three general categories of abnormal cases: failures, errors, and exceptions. In a procedural language like C, developers can only handle errors through return values, goto, and similar statements, with no uniform error-handling mechanism. Higher-level languages like C++ and Java introduce exception handling, but provide no syntax that effectively distinguishes normal logic from error logic; everything is handled uniformly and globally, which leads developers to treat all abnormal cases as exceptions. This is not conducive to building robust systems, and exception handling also carries a relatively large performance overhead.

The Rust language provides a dedicated handling method for each of these three kinds of abnormal cases, letting developers choose among them.

For failure cases, assertion tools can be used.

For errors, Rust provides a layered, return-value-based approach: Option handles values that may be absent, while Result handles errors that can reasonably be resolved and need to be propagated.

For exceptions, Rust treats them as problems that cannot be reasonably solved, providing a thread panic mechanism that allows threads to safely exit in the event of an exception.

With such a refined design, developers can then reasonably handle non-normal situations at a finer granularity and ultimately write more robust systems.

To provide flexible architectural capabilities, Rust uses traits as the basis for zero-cost abstraction. Traits are compositional rather than inheritance-based, giving developers the flexibility to architect both tightly and loosely coupled systems. Rust also provides generics to express type abstraction; combined with traits, this gives Rust static polymorphism and code reuse. Generics and traits also let a variety of design patterns be applied flexibly when shaping a system architecture.

To provide powerful language extensions and development efficiency, Rust introduces a macro-based metaprogramming mechanism, and provides two types of macros, declarative macros and procedural macros. Declarative macros are similar in form to C’s macro replacement, with the difference that Rust checks the code after the macro is expanded, giving it an advantage in terms of safety. Procedural macros give Rust powerful capabilities in code reuse and code generation.

To integrate well with existing ecosystems, Rust supports a very convenient, zero-cost FFI mechanism and is C-ABI compatible, and splits the language into Safe Rust and Unsafe Rust at the architectural level. Unsafe Rust deals with external systems, such as operating system kernels, and provides the unsafe keyword and unsafe blocks that explicitly separate safe code from code that accesses external interfaces. Safe Rust means the developer trusts the compiler to be safe at compile time, while Unsafe Rust means the compiler trusts the developer to be safe.

The Rust language is carefully designed to leave what the machine can check to the compiler, and what it cannot to the developer. The unsafe keyword reminds developers that the code they are writing may cause undefined behavior, so they must be careful. Human and compiler share the same "safety model", trust each other, and work in harmony to minimize the possibility of human error.

To make it easier for developers to collaborate, Rust provides a very useful package manager, Cargo [13], which compiles and distributes Rust code in packages (crates) and provides many commands for creating, building, distributing, and managing packages. Cargo can also be extended with plugins to meet further needs: for example, the official rustfmt and clippy tools automatically format code and find "bad smells" in code, respectively. Cargo also inherently embraces the open source community and Git, supporting one-click publishing of packages to the crates.io website for others to use.

To make it easier for developers to learn Rust, the official Rust team has made the following efforts.

Separate community working groups have been set up to write the official Rust Book, as well as other documentation of varying depth, such as compiler documentation and the Rustonomicon. They have even organized a free community teaching event, Rust Bridge, encouraged community blogging, and more.

The documentation for the Rust language supports the Markdown format, so the Rust standard library documentation is expressive. The documentation of many third-party packages in the ecosystem has also been enhanced.

A very useful online Playground tool is available for developers to learn, use, and share code.

The Rust compiler became self-hosting early on, making it easy for learners to read the source code to understand its inner workings and even contribute to it.

The Rust core team is constantly improving Rust, working to make it friendlier, less mentally taxing for beginners, and gentler on the learning curve. For example, the NLL feature was introduced to improve the borrow checker, allowing developers to write more intuitive code.

While borrowing much of the type system from Haskell, the Rust team deliberately de-academicizes the language when designing and promoting features to make Rust concepts more accessible.

Provides support for a hybrid programming paradigm based on the type system, providing powerful and concise abstraction that greatly improves developer productivity.

Provides a more rigorous and intelligent compiler. Based on the type system, the compiler rigorously checks for hidden problems in the code, and the official Rust team is constantly optimizing the compiler’s diagnostic information, making it easier for developers to locate errors and quickly understand why they occur.

VSCode/Vim/Emacs + Rust Analyzer has become a standard part of Rust development. Of course, the JetBrains family of IDEA/Clion also provides strong support for Rust.

Rust and Open Source
The Rust language itself, as an open source project, is one of the jewels of modern open source software.

Languages that came before Rust were largely driven by commercial development, but Rust changed that. For the Rust language, the open source community is part of the language itself; the Rust language belongs to the community.

The Rust team is made up of Mozilla and non-Mozilla members; to date, the Rust project has over 1,900 contributors. The Rust team is organized into a core team and domain working groups; for the Rust 2018 goals, the team was divided into the Embedded, CLI, Web, and WebAssembly working groups.

Designs in these areas go through an RFC process first; changes that do not need an RFC are submitted directly as pull requests to the Rust project repository. All processes are transparent to the community, and contributors can take part in review, but the final decision rests with the core team and the relevant domain working groups. The MCP process was also introduced later to streamline the FCP process.

The Rust team maintains three release channels: Stable, Beta, and Nightly. Stable and Beta versions are released every six weeks. Language or standard library features marked as Unstable or behind a Feature Gate can only be used on the Nightly channel.

The Rust team has also been exploring new open source governance options since the inception of the Rust Foundation.

Shortcomings of the Rust Language
While Rust has many advantages, it certainly has some disadvantages.

Rust is slow to compile. The Rust team has been improving compile speed, including incremental compilation support, a new compiler backend (Cranelift), parallel compilation, and other measures, but it is still slow. Incremental compilation is also currently buggy.

Steep learning curve.

IDE support is not good enough. For example, support for macro code is not very good.

Lack of various detection tools for memory insecurity issues specific to the Rust language.

Rust Ecosystem Base Library and Toolchain
The Rust ecosystem is becoming increasingly rich, with many base libraries and frameworks released as packages (crates) to crates.io [14], which at the time of writing hosts 62,981 crates with 7,654,973,261 downloads.

The most popular crates on crates.io by package usage scenario are as follows.

command-line tools (3133 crates)

no-std library (2778 crates)

Development tools (testing/ debug/ linting/ performance checking, etc., 2652 crates)

Web Programming (1776 crates)

API bindings (specific api wrappers for Rust use, such as http api, ffi related api, etc., 1738 crates)

Network Programming (1615 crates)

Data Structures (1572 crates)

Embedded Development (1508 crates)

Cryptography (1498 crates)

Asynchronous Development (1487 crates)

Algorithms (1200 crates)

Scientific computing (including physics, biology, chemistry, geography, machine learning, etc., 1100 crates)

In addition, there are libraries in many other categories, such as WebAssembly, encoding, text processing, concurrency, GUI, game engines, visualization, template engines, parsers, and OS bindings.

Commonly used well-known base libraries and toolchains

A number of excellent base libraries have emerged, all of which can be found on the crates.io home page. Here is a list of some of them.

Serialization / deserialization: Serde [15]

Command line development: clap [16]/ structopt [17]

asynchronous/network/web development: tokio [18]/ tracing [19]/ async-trait [20]/ tower [21]/ async-std [22]/ tonic [23]/ actix-web [24]/ smol [25]/ surf [26]/ async-graphql [27]/ warp [28]/ tungstenite [29]/ encoding_rs [30]/ loom [31]/ Rocket [32]

FFI development: libc [33]/ winapi [34]/ bindgen [35]/ pyo3 [36]/ num_enum [37]/ jni [38]/ rustler_sys [39]/ cxx [40]/ cbindgen [41]/ autocxx-bindgen [42]

API development: jsonwebtoken [43]/ validator [44]/ tarpc [45]/ nats [46]/ tonic [47]/ protobuf [48]/ hyper [49]/ httparse [50]/ reqwest [51]/ url [52]

Parsers: nom [53]/ pest [54]/ csv [55]/ combine [56]/ wasmparser [57]/ ron [58]/ lalrpop [59]

Cryptography: openssl [60]/ ring [61]/ hmac [62]/ rustls [63] / orion [64] / themis [65] / RustCrypto [66]

WebAssembly: wasm-bindgen [67] / wasmer [68] / wasmtime [69] / yew [70]

Database development: diesel [71]/ sqlx [72]/ rocksdb [73]/ mysql [74]/ elasticsearch [75]/ rbatis [76]

Concurrency: crossbeam [77]/ parking_lot [78]/ crossbeam-channel [79]/ rayon [80]/ concurrent-queue [81]/ threadpool [82] / flume [83]

Embedded development: embedded-hal [84]/ cortex-m [85]/ bitvec [86]/ cortex-m-rtic [87]/ embedded-dma [88]/ cross [89]/ Knurling Tools [90]

Tests: static_assertions [91]/ difference [92]/ quickcheck [93]/ arbitrary [94]/ mockall [95]/ criterion [96]/ proptest [97] / tarpaulin [98]/ fake-rs [99]

Multimedia development: rust-av [100]/ image [101]/ svg [102]/ rusty_ffmpeg [103]/ Symphonia [104]/

Game engines and base components: glam [105]/ sdl2 [106]/ bevy [107]/ amethyst [108]/ laminar [109]/ ggez [110]/ tetra [111]/ hecs [112]/ simdeez [113]/ rg3d [114]/ rapier (https://github.com/dimforge/rapier)/ Rustcraft [115]/ Nestadia [116]/ naga [117]/ Bevy Retro [118]/ Texture Generator [119]/ building_blocks [120]/ rpg-cli [121]/ macroquad [122]

TUI/GUI development: winit [123]/ gtk [124]/ egui [125]/ imgui [126]/ yew [127]/ cursive [128]/ iced [129]/ fontdue [130]/ tauri [131]/ druid [132]

Rust Industry Applications Inventory
Rust is a general-purpose system-level programming language whose applications essentially cover the combined domains of C/Cpp/Java/Go/Python.

Specifically, the application area of Rust currently covers the following domains.

Here’s an inventory of domestic and international Rust projects in different domains. By providing data about code volume, team size, and project cycle, we hope to give you a more intuitive understanding of Rust domain applications and development efficiency.

Data Services
Data services include database, data warehousing, data streaming, big data, etc.

TiKV (domestic / open source / distributed database)

Key words: database / distributed system / CNCF


TiKV [133] is an open source distributed transactional key-value database focused on providing a reliable, high-quality, practical storage architecture for next-generation databases. TiKV was originally developed by the PingCAP team. Currently, TiKV is used in production by leading enterprises in many industries, such as Zhihu, One Point Information, Shopee, Meituan, Jingdong Cloud, and Transition.

TiKV uses the Raft consensus algorithm to maintain consistency among multiple replicas of data, and uses the RocksDB storage engine for local storage. TiKV supports automatic data sharding and migration, as well as distributed transactions.

TiKV was announced as a sandbox cloud native project by CNCF in August 2018, and was promoted from sandbox to incubation project in May 2019.

Code and Team Size

The TiKV project contains approximately 300,000 lines of Rust code (including test code).

TiKV is a global open source project; you can gauge the team size from the contributor list [134]. The TiKV organization also contains some Go/Cpp projects; these are not counted here, only the people involved in the Rust project.

Main developer: about 20 people.

Community contribution: 300+ people.

Project cycle

TiKV evolved as TiDB's underlying storage; TiDB is written in Go, TiKV in Rust.

In January 2016, design and development began as TiDB's underlying storage engine.

The first version was released as open source in April 2016.

On October 16, 2017, TiDB released its GA version (TiDB 1.0) and TiKV released 1.0.

On April 27, 2018, TiDB released version 2.0 GA, and TiKV released 2.0.

On June 28, 2019, TiDB released version 3.0 GA, and TiKV released 3.0.

On May 28, 2020, TiDB released version 4.0 GA, and TiKV released 4.0.

On April 7, 2021, TiDB released version 5.0 GA, and TiKV released 5.0.


Some readers may wonder how efficient Rust development is and want to quantify it, especially in comparison with languages like C/Cpp/Go.

I think it is very difficult to quantify development efficiency, especially when compared to other languages. Let’s look at this from a different perspective, for example, from the perspective of agile project iteration management. If a language can meet the daily needs of agile development iterations and can help complete the evolution of the product, that’s enough to show the development efficiency of the language.

It is known that PingCAP has four to five times as many Go developers as Rust developers, with the workload in roughly the same ratio. From the data above, the Rust project (TiKV) steadily keeps pace with the iteration of the Go project (TiDB), which shows that Rust's development efficiency is sufficient for modern development needs.

TensorBase (domestic/open source/real-time data warehouse)

Key words: real time data warehouse / startup / angel round


TensorBase [135] is a startup project launched by Dr. Mingjian Jin in August 2020 to rebuild a real-time data warehouse in Rust from a modern, fresh perspective, with an open source culture and approach, serving data storage and analysis in this era of massive data. The TensorBase project has received an angel round investment from a world-renowned venture capital firm.

Code and Team Size

Because TensorBase is built on Apache Arrow [136] and Arrow DataFusion [137], the code statistics exclude the dependencies of these two projects.

TensorBase has over 54,000 lines of core code.

Team size.

Lead developer: 1 person.

Community contribution: 13 people.

Because it is a new project, the open source community is still under construction.

Project cycle

TensorBase releases on a time-based cadence rather than by semantic version. The planned iteration cycle is one major release per year and one minor release per month.

It has kept this rhythm from the official release on April 20, 2021 through the most recent release on June 16.

Timely Dataflow (foreign/open source/real-time dataflow)

Key words: Dataflow/ Distributed Systems/ Startups


Timely Dataflow [138] is a modern Rust implementation based on the Microsoft paper "Naiad: A Timely Dataflow System" [139]. It is an open source product of clockworks.io [140].

Naiad introduces the concept of timestamps, which yields a very low-level model that can describe arbitrarily complex streaming computations.

Timely dataflow gives a completely time-based abstraction that unifies streaming and iterative computation. Timely Dataflow can be used when you need to process streaming data in parallel and need iterative control.

Code and Team Size

The project contains about 13,000 lines of Rust code.

Team size.

Lead developer: 4 people.

Community contribution: 30+ people.

Project cycle

September 7, 2017, version 0.3.0.

June 28th, 2018, version 0.6.0.

September 16, 2018, version 0.7.0.

December 3rd, 2018, version 0.8.0.

March 31, 2019, version 0.9.0.

July 10, 2019, version 0.10.0.

March 10, 2021, version 0.12.0.

In addition to Timely Dataflow, the team also maintains Differential Dataflow [141], which is built on top of Timely Dataflow and iterates in parallel with Timely Dataflow.

Noria (foreign/academic research/open source/database)

Key words: database/academic paper projects


Noria [142] is a new streaming dataflow system intended as a fast storage backend for heavy-duty web applications, based on the MIT PhD thesis [144] of Jon Gjengset [143] and also referencing the OSDI '18 paper [145]. It is similar to a database, but supports precomputing and caching relational query results to speed up queries. Noria automatically keeps cached results up to date with the underlying data, which is stored in persistent base tables. Noria uses partially-stateful dataflow to reduce memory overhead, and supports dynamic, runtime changes to the dataflow and queries.

Code and Team Size

Rust has approximately 59,000+ lines of code.

Team size.

Lead contributor: 2 people

Community contributors: 21

Project cycle

Because it is a personal academic research project, there is no obvious release cycle.

From July 30, 2016 to April 30, 2020, there were more than 5,000 commits.

Vector (foreign/open source/data pipeline)

Key words: data pipeline / distributed systems / startup

Vector [146] is a high-performance, end-to-end (agent and aggregator) observability data pipeline built by Timber. It is open source and claims to be up to 10x faster than the alternatives in the space (Logstash, Fluentd, and the like). Vector is currently used by companies like Douban, checkbox.ai, fundamentei, BlockFi, Fly.io, and others. See the official performance report here [147], and the companies currently using Vector in production here [148].

Code and Team Size

Code volume is about 180,000 lines of Rust code.

Team size.

Lead developer: 9 people

Community Contribution: 140 people

Project cycle

March 22, 2019, initial release.

June 10, 2019, version 0.2.0 released.

July 2, 2019, version 0.3.0 released.

September 25, 2019, version 0.4.0 released.

October 11, 2019, version 0.5.0 released.

December 13, 2019, version 0.6.0 released.

January 12, 2020, version 0.7.0 released.

February 26, 2020, version 0.8.0 released.

April 21, 2020, version 0.9.0 released.

July 23, 2020, version 0.10.0 released.

March 12, 2021, versions 0.11.0 ~ 0.12 released.

April 22, 2021, version 0.13.0 released.

June 3, 2021, version 0.14.0 released.

Arrow-rs (foreign/open source/big data standard)

Key words: big data / data format standards / Apache

arrow-rs [149] is a Rust implementation of Apache Arrow, an in-memory columnar data format standard for heterogeneous big data systems. It has a very big vision: to provide a development platform for in-memory analytics, allowing data to move between heterogeneous Big Data systems and be processed faster.

Arrow introduced Rust starting with version 2.0 [150]; starting with 4.0, the Rust implementation was moved to a separate repository, arrow-rs.

Arrow’s Rust implementation actually consists of several different projects, including the following independent crates and libraries.

arrow [151], the arrow-rs core library, is included in arrow-rs.

arrow-flight [152], one of the arrow-rs components, is included in arrow-rs.

parquet [153], one of the arrow-rs components, is included in arrow-rs. Within the Big Data ecosystem, Parquet is the most popular file storage format.

DataFusion [154], a scalable in-memory query execution engine that uses Arrow as its format.

Ballista [155], a distributed computing platform, powered by Apache Arrow and DataFusion, included in DataFusion.

Code and team size

Combined, the Rust code across arrow-rs and its related components totals about 180,000 lines.

Team size.

Lead developer: about 10 people

Community contribution: 550+ people

Project cycle

The DataFusion project was created in 2016 and later joined the Apache Arrow project.

Starting with arrow-rs 4.0.

April 18, 2021, version 4.0 released.

May 18, 2021, version 4.1 released.

May 30, 2021, version 4.2 released.

June 11, 2021, version 4.3 released.

InfluxDB IOx (Foreign/ Open Source/ Time-series Database)

Keyword: time-series database / distributed

InfluxDB IOx [156], the next generation of InfluxDB's time-series engine, is being rewritten in Rust + Arrow.

InfluxDB's existing design has the following main fatal problems.

Inability to solve the problem of timeline (series cardinality) explosion

Strict memory management requirements in cloud-native environments mean that mmap is not applicable, and InfluxDB needs to support a mode of operation without local disk

The separation of index and data storage makes it difficult to implement efficient data import and export functions

These three issues are core to the existing design, so a rewrite is the better choice for supporting current requirements.

Code and Team Size

InfluxDB IOx code volume is about 160,000 lines of Rust code.

Team size.

Main developer: 5 people

Community contribution: 24 people

Project Cycle

The project started in November 2019, but as of today it is still very early: it is not ready for testing, and there are no builds or documentation.

However, GitHub activity shows that development is very active; major development was scheduled to begin in 2021.

CeresDB (Domestic/Commercial/Time Series Database)

Keyword: time-series database


CeresDB is a hybrid TP/AP time-series database developed by Ant Group to meet the needs of storing, multi-dimensionally drilling into, and analyzing massive time-series data in real time for financial, monitoring, IoT, and other scenarios. There are plans to open source it, but it is not open source yet.

Team size

About 8-10 people are currently working on database development.

Other information is not yet known.

tantivy (foreign / open source / full-text search)

Key words: full-text search / lucene

tantivy [157] is a full-text search engine library inspired by Apache Lucene and implemented in Rust.

tantivy delivers excellent performance; here is an application built on Rust + Tantivy + AWS that provides search over one billion web pages and generates common word clouds [158].

Code and team size

The code size is about 50,000 lines of Rust code.

Team size.

Lead developer: 1 person

Community contribution: 85 people

Project cycle

The project dates from 2016, with on average one minor release per month; it is currently at version 0.15.2.

Rucene (domestic/open source/search engine)

Key words: Zhihu/ lucene


Rucene [159] is an open source Rust-based search engine implemented by the Zhihu team. Rucene is not a complete application, but a code library and API that can easily be used to add full-text search functionality to an application. It is a Rust port of the Apache Lucene 6.2.1 project.

Code and team size

The code size is about 100,000 lines of Rust code.

Team size.

Lead developer: 4 people

Community contribution: 0 people

Project cycle

Probably because it is an internal company project that was open-sourced, it has not iterated through specific semantic versions. It is used in production within Zhihu.

Cloud Native
Cloud native areas include: confidential computing, Serverless, distributed computing platforms, containers, WebAssembly, operations and maintenance tools, etc.

StratoVirt (domestic/open source/container)

Key words: containers / virtualization / Serverless

StratoVirt [160] is a next-generation Rust-based virtualization platform developed by Huawei OpenEuler team.

Strato comes from "stratosphere", the layer of Earth's atmosphere that shields the planet from the external environment and is the most stable layer of the atmosphere. Similarly, virtualization is the isolation layer above the operating system platform: it protects the platform from damage by malicious upper-layer applications while providing a stable, reliable runtime environment for normal applications. The name Strato thus denotes a thin, light protective layer guarding the smooth operation of business on the openEuler platform. Strato also carries the project's vision and future: lightweight, flexible, secure, and complete protection.

StratoVirt is an enterprise-class virtualization platform for cloud data centers in the computing industry. A single architecture supports three scenarios: virtual machines, containers, and Serverless. It has key technical advantages in light weight and low noise, hardware-software collaboration, and security. StratoVirt reserves capabilities and interfaces for componentized assembly in its architectural design, so it can flexibly compose advanced features on demand, up to evolving toward standard virtualization, finding the best balance between feature requirements, application scenarios, and lightness and agility.

Code and Team Size

Code volume is approximately 27,000 lines of Rust code.

Team size.

Lead developer: 4 people.

Community contribution: 15 people.

Project cycle

2020-09-23, released 0.1.0.

2021-03-25, released 0.2.0.

2021-05-28, released 0.3.0.

Firecracker (Foreign/Product)

Key words: containers / Serverless / FaaS

Firecracker [161] is open source virtualization technology published by AWS, positioned for Serverless computing scenarios. Firecracker is essentially a lightweight microVM based on KVM that can support both multi-tenant containers and FaaS scenarios. Security and speed are its primary design goals. Its design philosophy can be summarized as follows:


Lean set of devices (Minimalism)

Based on Rust language (Builtin Safety)

Customized guest kernel (fast boot)

Optimized memory overhead (using musl c)

Firecracker uses an extremely lean device model (only a few key emulated devices), with the aim of reducing the attack surface and improving security. Firecracker uses a lean guest kernel (based on Alpine Linux), which allows it to boot a virtual machine in 125 ms. Firecracker uses musl libc instead of GNU libc, reducing a virtual machine's minimum memory overhead to 5 MB.

Code and Team Size

The amount of code is about 75000+ lines.

Team size.

Main developer: 7 people

Community contribution: 140 people

Project Cycle

Since the release of 0.1.0 on March 5, 2018, a minor version has shipped roughly every month.

As of last month, version 0.24.0 was released.

Krustlet (foreign/product)

Key words: Kubernetes/ WebAssembly/ containers


Microsoft Deis Labs [162] has released Krustlet [163], a Kubernetes kubelet implemented in Rust. It listens to the Kubernetes API for new Pod requests (to run WASI-based applications in a cluster) whenever a request matches its node selector. Thus, to run applications on Krustlet nodes, users can use taints, tolerations, and node selectors. In addition, the user must compile the application to a WebAssembly binary: using clang [164] if the application is written in C, or cargo [165] if it is written in Rust. The user then packages the application using wasm-to-oci [166] and pushes the container image to a container registry. To deploy the application, the user defines a Kubernetes manifest containing tolerations.

The project is not yet at 1.0 and has many experimental features, but its existence is a testament to where WebAssembly is headed in the container space. Now that Microsoft has joined the Bytecode Alliance, the project will also work with the other Alliance members to advance WebAssembly, especially the upcoming WASI specification work and module linking.

Code and Team Size

The amount of code is about 21,000+ lines.

Team size.

Lead developer: 7 people

Community Contribution: 32 people

Project Cycle

Since April 7, 2020, when 0.1.0 was released, a new version has been released about every month or two, and is currently up to version 0.7.0.

The team has plans to reach version 1.0 in the next few months.

linkerd2-proxy (foreign/product)

Key words: service mesh / k8s


Linkerd is the originator of the service mesh, but because linkerd-proxy required a Java virtual machine to run, it was at a clear disadvantage in startup time, warm-up, memory consumption, and so on compared with its challenger Envoy, which was released half a year after Linkerd. Linkerd was later rewritten as Linkerd2.

Linkerd2 (once named Conduit [167]) is a next-generation lightweight service mesh framework from Buoyant. Unlike linkerd, it is dedicated to Kubernetes clusters and is more lightweight (based on Rust and Go, without the memory overhead of a JVM), and it runs its proxy alongside the actual service pods in a sidecar fashion (similar to Istio).

linkerd2-proxy [168] is the underlying proxy in Linkerd2. The proxy is arguably the most critical component of a service mesh. It scales with the deployment of the application, so low additional latency and low resource consumption are crucial. It is also where all of the application's sensitive data is handled, so security is critical. If the proxy is slow, bloated, or insecure, so is the service mesh. Rewritten in Rust, linkerd2-proxy [169] is now on par with Envoy in terms of performance and resource consumption.

Rust was the only choice for linkerd2-proxy. It provides lightning-fast performance, predictably low latency, and the security properties we know service mesh proxies need. It also offers modern language features such as pattern matching and an expressive static type system, as well as tools such as a built-in testing framework and package manager, which make programming in it a pleasure.
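As a small illustration of the language features cited here, pattern matching plus an exhaustive static type system let a proxy-style routing decision be encoded so that the compiler checks every case. This is a standalone sketch with invented names, not code from linkerd2-proxy:

```rust
// Illustrative only: modeling a proxy routing decision with an enum
// and exhaustive pattern matching. Not actual linkerd2-proxy code.
#[derive(Debug, PartialEq)]
enum Route {
    Http { authority: String },
    Tcp { port: u16 },
    Reject,
}

fn describe(route: &Route) -> String {
    // The compiler enforces that every variant is handled; adding a new
    // Route variant without updating this match is a compile error.
    match route {
        Route::Http { authority } => format!("HTTP to {}", authority),
        Route::Tcp { port } => format!("TCP to port {}", port),
        Route::Reject => "rejected".to_string(),
    }
}

fn main() {
    let r = Route::Http { authority: "web.svc.cluster.local".into() };
    println!("{}", describe(&r));
}
```

The exhaustiveness check is exactly the kind of compile-time guarantee that makes large refactors of a proxy's routing logic safer.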

Linkerd2-proxy is built on top of the Rust asynchronous ecosystem, using frameworks and libraries such as Tokio [170], Hyper [171], and Tower [172].

Code and team size

The code size is about 43000+ lines.

Team size.

Main developer: 3 people.

Community contribution: 37 people.

Project cycle

The project is currently at version v2.148.0, with roughly one minor release per week.

Lucet (Foreign/Product)

Key words: FaaS / Serverless / WebAssembly / compiler

Lucet[173] is a native WebAssembly compiler and runtime, designed to safely execute untrusted WebAssembly programs inside your application. It was developed by Fastly as a sub-project of the Bytecode Alliance. Fastly recruited Mozilla's server-side WebAssembly team in 2020, and the Lucet team has since merged with the wasmtime[174] team.

Fastly’s large CDN business gave rise to the idea of moving into edge computing, and they are emerging as one of the most competitive and engaged leaders.

Regarding edge computing, another head company is Cloudflare (NET.US). From a technical perspective, Fastly and Cloudflare have adopted two different approaches in their serverless edge computing solutions.

Cloudflare chose to build their solution on the Chromium V8 engine. This allowed them to leverage the work already done by the Google (GOOG.US) Chrome team to bring their edge computing products to market quickly as early as 2018.

This was a major improvement over the serverless solutions offered by cloud providers at the time, such as Amazon (AMZN.US) Lambda. Cloudflare Workers reduced cold start times by a factor of 100, into the millisecond range, and reduced memory usage by a factor of 10, allowing more efficient use of hardware resources.

But instead of relying on existing technologies for serverless computing, such as reusable containers or the V8 engine, Fastly decided to go all in on WebAssembly and built its own Lucet compiler and runtime, optimized for performance, security, and compactness.

Fastly has been doing this work behind the scenes since 2017, and it provides a solid foundation for the Compute@Edge product line, a platform that now runs production code for multiple customers.

Lucet compiles WebAssembly into fast, efficient binaries for execution, and also enhances security through strict memory allocation that leaves no residue from previous requests. Lucet also includes a tightly optimized, simplified runtime environment, on which the Fastly team spent most of its development time. The result is better performance than the V8 engine.

Fastly cold start times are well into the microsecond range – officially claimed to be 35 microseconds. This is at least 100 times faster than the V8 engine, which takes 3-5 milliseconds to start up (3,000 to 5,000 microseconds).

Again, since Lucet contains only the code modules needed to run the compiled assembly code, it requires only a few kilobytes of memory. This is about one thousandth of the 3MB used by the V8 engine.

Code and Team Size

Lucet has over 29,000 lines of code and wasmtime has over 270,000 lines of code in total.

Team size.

Main development: 16 people.

Community contribution: more than 200 people (mostly wasmtime contributors)

Project cycle

Lucet is currently in maintenance mode, while wasmtime is being refactored at a rapid pace.

The iteration cycle is roughly one minor release per month.

wasmcloud (foreign/open source/product)

Key words: WebAssembly/ Distributed Computing


The wasmCloud [175] runtime can be used in cloud, browser, and embedded scenarios. wasmCloud is a WebAssembly-based distributed computing platform. It is relatively innovative in developing the waPC standard for secure procedure calls between guest and host, addressing gaps in current features such as WASI.
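The waPC idea, in which host and guest can only call each other through narrow, registered, named handlers rather than arbitrary memory access, can be sketched in plain Rust. This is a conceptual model with invented names, not the real waPC API:

```rust
use std::collections::HashMap;

// Conceptual sketch of waPC-style dispatch: the guest registers named
// handlers, and the host can only call through this narrow interface.
// This is NOT the real waPC API, just the shape of the idea.
type Handler = fn(&[u8]) -> Result<Vec<u8>, String>;

struct Guest {
    handlers: HashMap<String, Handler>,
}

impl Guest {
    fn new() -> Self {
        Guest { handlers: HashMap::new() }
    }
    fn register(&mut self, op: &str, f: Handler) {
        self.handlers.insert(op.to_string(), f);
    }
    // The "host call": dispatch by operation name, opaque payload in,
    // opaque payload out. Unknown operations are rejected.
    fn invoke(&self, op: &str, payload: &[u8]) -> Result<Vec<u8>, String> {
        match self.handlers.get(op) {
            Some(f) => f(payload),
            None => Err(format!("unknown operation: {}", op)),
        }
    }
}

fn echo(payload: &[u8]) -> Result<Vec<u8>, String> {
    Ok(payload.to_vec())
}

fn main() {
    let mut guest = Guest::new();
    guest.register("echo", echo);
    let reply = guest.invoke("echo", b"hello").unwrap();
    assert_eq!(reply, b"hello");
    println!("ok");
}
```

Because all traffic crosses a byte-payload boundary keyed by operation name, the same pattern works whether the guest is in-process, in a WebAssembly sandbox, or on another machine.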

Code and team size

The code volume is about 11000 lines of Rust code.

Team size.

Main developer: 2 people.

Community contribution: 11 people.

Project Cycle

The project started on February 17, 2021, and the iteration cycle is about one small version every two weeks.

Habitat (Foreign/Open Source/Operations and Maintenance Tools)

Key words: Chef / DevOps / Operations tools


Habitat [176] enables application teams to build, deploy, and manage any application in any environment, whether it is a traditional data center or containerized microservices.

“Lift & Shift” legacy applications to modern platforms. Migrating existing, business-critical applications to modern platforms is a pain point for many organizations.

Deliver applications through a cloud-native (cloud, container) strategy. Many organizations are hindered in moving to and deploying cloud-native platforms.


Habitat makes automation easy by building the management interface and applications together.

Habitat Operator: Get one Kubernetes Operator for all your applications, without the need for a special Operator for each application.

Whether your applications are on Kubernetes or not, Habitat’s Open Service Broker allows them to coexist through Kubernetes’ native interface.

Code and Team Size

Code volume is approximately 74,000 lines of Rust code.

Team size.

Lead developer: 5 people.

Community Contribution: 140 people

Project Cycle

The iteration cycle is one minor release per week, and the current version is 1.6.342.

Operating Systems
The operating system area includes various operating systems implemented with Rust.

Rust for Linux (foreign / Rust into Linux support project)

Keyword: Linux


The Rust for Linux[177] project aims to promote Rust as a second programming language for the Linux kernel.

The Linux kernel is at the heart of the modern Internet, from servers to client devices, and it is on the front line of processing network data and other forms of input. As such, vulnerabilities in the Linux kernel can have widespread impact, putting the security and privacy of people, organizations, and devices at risk. Because it is written primarily in C, which is not memory-safe, memory-safety vulnerabilities such as buffer overflows and use-after-free are an ongoing problem. By making it possible to write parts of the Linux kernel in Rust, which is memory-safe, we can completely eliminate memory-safety vulnerabilities in certain components, such as drivers.
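To make the contrast concrete, the following standalone snippet (ordinary user-space Rust, not kernel code) shows how ownership turns a use-after-free into a compile-time error:

```rust
// Standalone illustration (not kernel code): Rust's ownership rules
// make use-after-free a compile-time error rather than a runtime bug.
fn consume(buf: Vec<u8>) -> usize {
    // `consume` takes ownership; the buffer is freed when it returns.
    buf.len()
}

fn main() {
    let buf = vec![0u8; 16];
    let n = consume(buf);
    // The line below would be a use-after-free in C; in Rust it simply
    // does not compile: error[E0382]: borrow of moved value: `buf`
    // println!("{:?}", buf);
    println!("consumed {} bytes", n);
}
```

The same discipline applies to kernel resources such as DMA buffers or locks: once ownership is given away, the compiler rejects any further use.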

Current progress: the ISRG, with sponsorship from Google, has hired Miguel Ojeda (a core developer) to work full-time on Rust for Linux and other security work for one year. The hope is that by working on this full-time, he can help support this piece of critical digital infrastructure.

Team Size

Core development: 1 to 6 people.

No other information is available at this time.


Coreutils (foreign/open source/GNU core utilities)

Key words: GNU / Shell / Rust for Linux


Coreutils[178] is a Rust reimplementation of the GNU shell core utilities.

Code and team size

The code volume is about 77,000 lines of Rust code.

Team size.

Main developer: 8 people

Community contribution: 250 people

Project cycle

The project started in late 2020, with an average iteration cycle of one minor release per month, and is currently at version 0.0.6. In its current state, it is enough to boot a Debian system into GNOME.

Occlum (domestic/open source/TEE library OS)

Key words: confidential computing / trustworthy computing / TEE / library operating system

Occlum [179] is an open source TEE library OS from Ant Group, and the first open source project initiated by a Chinese company in the Confidential Computing Consortium (CCC).

Occlum provides a POSIX programming interface, supports many mainstream languages (C/C++, Java, Python, Go, Rust, etc.), and supports multiple secure file systems. Occlum has not only been widely used in industrial scenarios, but has also published an academic paper at ASPLOS 2020, a top systems conference, representing the leading edge of the confidential computing industry.

Architecturally, Occlum provides not only basic Linux-like operating system capabilities, but also a Docker-like user interface, such as occlum build and occlum run, which resemble docker commands.

Code and team size.

Occlum has about 28,000+ lines of code.

Team size.

Main developer: 5 people.

Community contribution: 22 people.

Project Cycle

The iteration cycle is one new release every six weeks.

rCore and zCore (Domestic/ Education/ Academic/ Open Source/ Operating Systems)

Key words: Tsinghua University/ rCore/ zCore/ operating system/ teaching


rCore[180] is a Linux kernel reimplemented in Rust, born in 2018, and currently piloted in the operating system teaching experiment of Tsinghua Computer Science Department.

zCore[181] is a Rust reimplementation of the Zircon microkernel (the microkernel of Google's Fuchsia OS). Running in kernel mode, it provides exactly the same system calls as Zircon, so it can run native Fuchsia user programs. Not only that, it can also run as a normal user process in user mode on Linux or macOS, a mode generally referred to as LibOS or user-mode OS. You don't even need to install the QEMU emulator; just install the official Rust toolchain to compile and run zCore!

Some related learning resources.

The next generation of Rust OS: zCore is officially released [182].

Writing an OS with Rust | Introduction to the Tsinghua rCore OS Tutorial[183]

Code and team size

rCore code volume is about 26000 lines of Rust code, zCore code volume is about 27000 lines of Rust code.

Team size.

Main developer: 3 to 5 people

Community contribution: about 30 people

Project cycle

Both projects are in maintenance phase, no external releases.

Redox (foreign/ open source/ operating system)

Keyword: operating system


Redox is a UNIX-like operating system written in Rust [184], and its goal is to bring the innovations of the Rust language to a modern microkernel and a full set of applications. The company behind Redox is reportedly System76. The main project is hosted on GitLab.

Code and Team Development

The code base is currently about 1.34 million lines of Rust code, making it a heavyweight project in the Rust ecosystem.

Team size.

Main developer: 21 people

Community Contribution: 79 people.

Project cycle

Redox started in 2016 and reached version 0.3 in 2017; it has released roughly one minor version per year since, reaching version 0.5 this year.

tockOS (foreign/open source/embedded real-time operating system)

Key words: embedded operating system / real-time


Tock [185] is an embedded operating system designed to run multiple concurrent, mutually distrustful applications on Cortex-M and RISC-V based embedded platforms. Tock is designed with protection in mind, both against potentially malicious applications and against device drivers. Tock uses two mechanisms to protect the different components of the operating system. First, the kernel and device drivers are written in Rust, a systems programming language that provides compile-time memory safety, type safety, and strict aliasing. Tock uses Rust to protect the kernel (such as the scheduler and the hardware abstraction layer) from platform-specific device drivers, and to isolate device drivers from each other. Second, Tock uses a memory protection unit to isolate applications from each other and from the kernel.

OpenSK [186] is an open source implementation of a security key written in Rust that supports both the FIDO U2F and FIDO2 standards.

Code and team size

The code volume is about 150,000 lines of Rust code.

Team size.

Main development: 4 people.

Community contribution: 123 people.

Project Cycle

The project is currently in the maintenance phase.

The current version is 1.6; previously, a minor version was released roughly every six months.

Theseus (foreign/open source/high-end embedded operating system/research project)

Key words: embedded operating system/research


Theseus [187] is the result of years of experiments at Rice University, USA, and also with the participation of other universities, such as Yale University. It redesigned and improved the modularity of the operating system by reducing the state held by one component over another, and by using a secure programming language, Rust, to transfer as much of the operating system responsibility as possible to the compiler.

Theseus embodies two main contributions.

An operating system architecture in which many tiny components have well-defined, runtime-persistent bounds, and whose interactions do not require them to hold state in each other.

An internal language approach that uses language-level mechanisms to implement the operating system itself. This allows the compiler to enforce invariants on the semantics of the operating system.

Theseus’ architecture, internal language design, and state management enable live evolution and fault recovery of core operating system components in a way that goes beyond existing work.

For more information: Theseus: an Experiment in Operating System Structure and State Management [188]

Code and Team Size

Code volume is approximately 56,000 lines of code.

Team size.

Main developer: 1 person.

Community contribution: 17 people.

Project Cycle

The project started in March 2017 and is currently in the maintenance phase.

Tools and Software
The tools and software include some command line tools, desktop software, etc. implemented using Rust.

RustDesk (domestic/partially open source/remote desktop software)

RustDesk [189] is remote desktop software that works out of the box without any configuration, positioned as a replacement for TeamViewer and AnyDesk. You stay in full control of your data, without worrying about security. RustDesk is commercial open source software, 90% open source.

Code and Team Size

The code size is about 35000 lines of Rust code.

Team size.

Main developer: 1 person.

Community contribution: 8 people.

Project cycle

Version 1.1 was released on March 27, 2021, and no previous iterations are known.

Basically one or two minor iterations per month since then.

spotify-tui (foreign/terminal music software)

Keyword:Terminal UI/ Spotify


spotify-tui[190] is a terminal Spotify music client, based on the Rust terminal UI development framework Tui-rs[191].

Code and team size

The code size is about 12,000 lines of Rust code.

Team size.

Main developer: 1 person.

Community contribution: 84 people.

Development cycle

Already in maintenance phase, averaging one minor release per month.

Ripgrep (foreign / terminal text search)

Key words: text processing / terminal tools


ripgrep[192] is a line-oriented search tool that recursively searches a specified directory for a given pattern. It is written in Rust, and its speed is unparalleled among similar tools; ripgrep is currently the fastest text search tool on Linux.

Code and team size

Code size is about 35,000 lines of Rust code.

Team size.

Main developer: 1 person.

Community contribution: 287 people.

Project cycle

The project started in 2016 and iterated frequently until 2018, then entered a stable maintenance period with roughly one major version per year; the current version is 13.0.0.

nushell (foreign/open source/shell tool)

Keyword: shell


NuShell [193] is a modern shell written in Rust that runs across Unix, Windows, and macOS systems.

Unlike traditional Unix shells, NuShell draws inspiration from PowerShell and treats the result of each command as an object with structure, rather than the traditional raw bytes. However, it is much faster than PowerShell.

NuShell features structured data and SQL-like two-dimensional table manipulation, which is an advantage when dealing with large amounts of structured data; it is almost a SQL interpreter over local files and data. However, its lack of flow control statements makes it difficult to handle logic-heavy system administration tasks.

Code and Team Size

The amount of code is about 100,000 lines of Rust code.

Team size.

Main developer: 2 people.

Community contribution: 231 people.

Project Cycle

The project started in May 2019, with an iteration cycle of one minor release per month, currently at version 0.32.0.

alacritty (foreign/open source/simulation terminal)

Key words: emulation terminal/OpenGL


Alacritty [194] is a free, open source, fast, cross-platform terminal emulator that uses the GPU (graphics processing unit) for rendering, implementing certain optimizations not available in many other terminal emulators [196] for Linux [195].

Alacritty focuses on two goals: simplicity and performance. The performance goal means it should be faster than any other available terminal emulator. The simplicity goal means it does not support features such as tabs or splits, which can easily be provided by a terminal multiplexer such as tmux [197].

Its performance is already second to none among similar tools on Linux.

Code and team size

The code size is about 22,000 lines of Rust code.

Team size.

Main developer: 2 people

Community contribution: 330 people.

Project Cycle

The project was started in 2016, and the current iteration cycle averages a new minor release every three months. The current version number is 0.8.0. It is not yet stable for version 1.0, but it has become a daily development tool for many people.

Gitui (foreign/open source/Git Terminal UI)

Keyword: Git / Terminal UI


Gitui [198] is a fast Git terminal interface.

Code and Team Size

The amount of code is about 29,000 lines of Rust code.

Team Size.

Main development: 1 person.

Community Contribution: 42 people.

Project Cycle

The project was launched on March 15, 2020, with an iteration cycle averaging one minor release every two weeks. Current version 0.16.1.

Other good terminal tools

exa [199], a Rust rewrite of the ls tool.

bottom[200], a Rust rewrite of the top tool.

starship[201], a super fast, minimalist command line prompt, highly customizable and supporting any shell.

bat[202], a cat clone with more features.

delta[203], a viewer for git and diff output.

zoxide[204], faster file system navigation.

fd[205], a simple, fast, user-friendly alternative to find.

tealdeer[206], a crowd-sourced terminal command cheat sheet.

Machine learning
The machine learning field includes Rust-based implementations of machine learning frameworks, scientific computing libraries, and more.

linfa (foreign/open source/machine learning toolkit)

Key words: scikit-learn/ sklearn/ foundation toolkit


Linfa [207] is a Rust library in the spirit of Python's scikit-learn, designed to provide a comprehensive toolkit for building machine learning applications in Rust. The team has also created the Rust-ML organization.

scikit-learn, also written sklearn, is an open source machine learning toolkit for Python. Built on Python numerical computing libraries such as NumPy, SciPy, and Matplotlib, it enables efficient algorithm application and covers almost all major machine learning algorithms.

More information: The Rust Book of Machine Learning [208]

Code and team size

The code volume is about 23000 lines of Rust code.

Team size.

Lead developer: 6 people

Community contribution: 12 people

Project cycle

The project was initiated in 2018, but official development started in October 2020, with iterations continuing through 2021. The most recent version, 0.4.0, was released in April. Development is still relatively active.

tokenizers (foreign/open-source/natural language processing word splitting library)

Key words: natural language processing / word separation library


tokenizers [209] is the Rust implementation of Hugging Face's open source tokenization library.

Hugging Face is a chatbot startup based in New York, USA. The company is a big name in the NLP community and closed a $40 million Series B funding round in March. Its open source NLP library Transformers is released on GitHub.

One of the bottlenecks in the modern NLP pipeline based on deep learning is tokenization, especially for general-purpose and framework-independent implementations.

Therefore, the core of the library is written in Rust, with Node and Python bindings. It provides implementations of today's most commonly used tokenizers, with a focus on performance and versatility.


Train new vocabularies and tokenize, using today's most commonly used tokenizers.

Extremely fast (for both training and tokenization), thanks to the Rust implementation: tokenizing a GB of text takes less than 20 seconds on a server CPU.

Easy to use, but also very versatile.

Designed for both research and production.

Normalization with alignment tracking: you can always get the part of the original sentence that corresponds to a given token.

Does all the pre-processing: truncation, padding, and adding the special tokens your model needs.
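The alignment-tracking idea, where every token carries the byte span it came from, can be illustrated with a deliberately naive whitespace tokenizer in plain Rust; the real library keeps the same kind of offsets through much more sophisticated algorithms such as BPE and WordPiece:

```rust
// Naive illustration of offset (alignment) tracking: each token carries
// the byte range it came from, so it can always be mapped back to the
// original sentence. Real tokenizers keep this mapping through complex
// normalization and subword-splitting pipelines.
fn tokenize_with_offsets(text: &str) -> Vec<(&str, usize, usize)> {
    let mut tokens = Vec::new();
    let mut start = None;
    for (i, ch) in text.char_indices() {
        if ch.is_whitespace() {
            // End of a token: emit it with its byte span.
            if let Some(s) = start.take() {
                tokens.push((&text[s..i], s, i));
            }
        } else if start.is_none() {
            start = Some(i);
        }
    }
    if let Some(s) = start {
        tokens.push((&text[s..], s, text.len()));
    }
    tokens
}

fn main() {
    let text = "Hello tokenizers world";
    for (tok, s, e) in tokenize_with_offsets(text) {
        // Every token can be recovered from its span in the source.
        assert_eq!(&text[s..e], tok);
        println!("{}..{} {}", s, e, tok);
    }
}
```

Keeping offsets is what makes tasks like span labeling or highlighting possible after normalization has rewritten the text.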

Code and Team Size

The code volume is about 28,000 lines of Rust code, which is 68% of the project; the rest includes some Python code.

Project cycle

The project started in October 2019, with an average iteration cycle of one minor release per month, and the current version is Python V0.10.3.

tch-rs (foreign/open source/PyTorch bindings)

Key words: PyTorch/ cpp api bindings


tch-rs [210] provides Rust bindings for PyTorch's C++ API. The tch crate aims to provide thin wrappers around the C++ PyTorch API, staying as close to the original API as possible, so that more idiomatic Rust bindings can be developed on top of it.

Code and team size

The code size is about 58000 lines of Rust code.

Team size.

Lead developer: 1 person

Community contribution: 36 people

Project Cycle

The project started in February 2019 and as of today, no official release has been made. It is still under active maintenance, but may be a personal project.

ndarray (foreign/open source/scientific computing)

Key words: scientific computing / N-dimensional arrays

ndarray [211] is an open source project developed by bluss, a senior scientific-computing expert on the official Rust team, implementing Rust-based matrices and linear operations. The goal is to build a scientific computing ecosystem in Rust similar to numpy and openblas. It is the foundation for many kinds of scientific computing libraries in machine vision, data mining, bioinformatics, and so on. Its main users in the community are universities and research institutes in related fields.

Currently, Huawei is also deeply involved in the development of this basic library, see Huawei | Analysis and practice of the Rust scientific computing multidimensional array computing library [212].

There is also a library related to linear algebra: ndarray-linalg [213].
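The core layout idea behind an N-dimensional array library, a flat buffer addressed through shape and strides, can be shown with a minimal std-only sketch; ndarray itself offers a far richer API (slicing, broadcasting, BLAS-backed dot products), and the types below are invented for illustration:

```rust
// Minimal std-only sketch of strided 2-D indexing, the core layout idea
// behind N-dimensional array libraries such as ndarray. Illustrative only.
struct Matrix {
    data: Vec<f64>, // flat, row-major storage
    rows: usize,
    cols: usize,
}

impl Matrix {
    fn from_rows(rows: Vec<Vec<f64>>) -> Self {
        let r = rows.len();
        let c = rows[0].len();
        Matrix { data: rows.into_iter().flatten().collect(), rows: r, cols: c }
    }
    // Row-major layout: element (i, j) lives at offset i * cols + j.
    fn get(&self, i: usize, j: usize) -> f64 {
        self.data[i * self.cols + j]
    }
    // Naive O(n^3) matrix product over the flat buffers.
    fn dot(&self, other: &Matrix) -> Matrix {
        assert_eq!(self.cols, other.rows);
        let mut out = vec![0.0; self.rows * other.cols];
        for i in 0..self.rows {
            for j in 0..other.cols {
                for k in 0..self.cols {
                    out[i * other.cols + j] += self.get(i, k) * other.get(k, j);
                }
            }
        }
        Matrix { data: out, rows: self.rows, cols: other.cols }
    }
}

fn main() {
    let a = Matrix::from_rows(vec![vec![1.0, 2.0], vec![3.0, 4.0]]);
    let b = Matrix::from_rows(vec![vec![5.0, 6.0], vec![7.0, 8.0]]);
    let c = a.dot(&b);
    println!("{} {}", c.get(0, 0), c.get(1, 1)); // 19 50
}
```

Generalizing the offset formula to arbitrary stride vectors is what lets libraries like ndarray support transposes and slices without copying data.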

Code and team size

The code volume is about 29000 lines of Rust code.

Team size.

Main developer: 1 person

Community contribution: 57 people

Project cycle

The project has been started since November 2015, with an average of 1 to 2 minor releases every six months.

TVM-rs (foreign / open source / TVM rust binding)

Key words: Apache/ TVM


tvm-rs[214] is a Rust binding for TVM.

TVM is a deep learning automatic code generation method proposed by Tianqi Chen, PhD, and others at the University of Washington, briefly introduced by Machine Heart last August. The technique automatically generates deployable optimized code for most computational hardware, with performance comparable to the best vendor-optimized compute libraries, and can be adapted to new dedicated accelerator backends.

In simple terms, TVM can be described as a collection of tools that can be used in combination to implement neural network acceleration and deployment. TVM is widely applicable: it supports most of the mainstream neural network model frameworks (ONNX, TF, Caffe2, etc.) and can be deployed on almost any platform, such as Windows, Linux, Mac, and ARM.

Code and team size

The code size is about 10,000+ lines.

Team size.

Main developer: 3 people

Community contribution: 7 to 10 people

Project Cycle

Maintained intermittently.

Neuronika (foreign / open source / machine learning framework)

Key words: PyTorch / machine learning framework


Neuronika [215] is a machine learning framework written in Rust, similar to PyTorch, which now implements the most common layer components (dense layers, dropout layers, etc.) at speeds comparable to PyTorch. It is built with a focus on ease of use, rapid prototyping, and efficient performance.

Neuronika was developed by Francesco Iannelli et al., who are master's students in computer science. The framework provides automatic differentiation and dynamic neural networks, much like PyTorch. The most common layer components, such as dense layers, dropout layers, GRU, LSTM, and 1d-2d-3d CNNs, are currently implemented; however, pooling layers and some other components are still missing. Neuronika also provides loss functions, optimizers, computational graphs, tensors, and data utilities.

In terms of speed, the project authors say that Neuronika's performance is comparable to PyTorch; for benchmarking, you can refer to the test documentation.

The core mechanism of Neuronika is reverse-mode automatic differentiation, which allows users to easily build dynamic neural networks, change them without any overhead, and run them through the API.
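Reverse-mode automatic differentiation can be boiled down to a tiny tape-based sketch in plain Rust: record each operation with its local partial derivatives, then sweep the tape backwards to accumulate gradients. Names here are invented for illustration and do not reflect Neuronika's actual implementation:

```rust
// Tiny tape-based reverse-mode autodiff sketch (illustrative only).
// Each node records its parents with the local partial derivatives;
// a backward sweep over the tape accumulates gradients.
struct Tape {
    nodes: Vec<Vec<(usize, f64)>>, // per node: (parent index, local grad)
    values: Vec<f64>,
}

impl Tape {
    fn new() -> Self {
        Tape { nodes: Vec::new(), values: Vec::new() }
    }
    fn var(&mut self, v: f64) -> usize {
        self.nodes.push(Vec::new());
        self.values.push(v);
        self.nodes.len() - 1
    }
    fn add(&mut self, a: usize, b: usize) -> usize {
        let v = self.values[a] + self.values[b];
        self.nodes.push(vec![(a, 1.0), (b, 1.0)]); // d(a+b)/da = d(a+b)/db = 1
        self.values.push(v);
        self.nodes.len() - 1
    }
    fn mul(&mut self, a: usize, b: usize) -> usize {
        let v = self.values[a] * self.values[b];
        self.nodes.push(vec![(a, self.values[b]), (b, self.values[a])]);
        self.values.push(v);
        self.nodes.len() - 1
    }
    // Backward sweep: propagate d(output)/d(node) from output to inputs.
    fn grad(&self, output: usize) -> Vec<f64> {
        let mut grads = vec![0.0; self.nodes.len()];
        grads[output] = 1.0;
        for i in (0..self.nodes.len()).rev() {
            for &(parent, local) in &self.nodes[i] {
                grads[parent] += grads[i] * local;
            }
        }
        grads
    }
}

fn main() {
    // f(x, y) = x * y + x, evaluated at x = 3, y = 4.
    let mut t = Tape::new();
    let x = t.var(3.0);
    let y = t.var(4.0);
    let xy = t.mul(x, y);
    let f = t.add(xy, x);
    let grads = t.grad(f);
    // df/dx = y + 1 = 5, df/dy = x = 3
    println!("f = {}, df/dx = {}, df/dy = {}", t.values[f], grads[x], grads[y]);
}
```

Because the tape is rebuilt on every forward pass, control flow in ordinary Rust (loops, branches) automatically yields a dynamic computation graph, which is the property the framework advertises.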

Code and team size

The code size is about 26,000 lines of Rust code.

Team size.

Lead developer: 2 people

Community contribution: 0 people

New project, no one has contributed yet.

Project cycle

No initial release has been published yet, but development is relatively active.


TensorFlow-rs [216], TensorFlow Rust bindings; in maintenance as of 2021, less active.

Whatlang [217], a natural language recognition project based on a Rust implementation.

Awesome-Rust-MachineLearning [218], list of Rust machine learning related ecological projects.

Games
The games domain includes games made with Rust, Rust game engines, Rust game ecosystem building, and more.

veloren (foreign/sandbox game/open source)

Key words: sandbox games / Minecraft


Veloren[219] is a free, open source multiplayer voxel RPG written in Rust, inspired by games such as Cube World, The Legend of Zelda: Breath of the Wild, Dwarf Fortress, and Minecraft. It supports multiplayer and single player, and runs on Windows, Mac, and Linux. Visit the official website [220] to learn more.

Veloren is said to be among the first projects to use Rust, with development starting in 2013, before Rust reached 1.0. As of 2021, the project is still actively updated.

Veloren’s founder is also a member of the official Rust game development team.

Code and team size

Code volume is about 200,000 lines of Rust code.

Team size.

Lead developer: 15 people.

Community contribution: 175 people.

Project Cycle

The project iteration cycle averages one minor release every three months. Currently at version 0.10.0.

A/B Street (foreign/open source/street traffic exploration game)


A/B Street [221] is a game exploring how small changes to a city affect the movement of drivers, cyclists, transit users, and pedestrians.

The ultimate goal of the game is to let players become real proponents of tweaks to Seattle's (default) infrastructure. A/B Street uses OpenStreetMap[222], so the game can be set anywhere in the world. A/B Street is of course a game, using a simplified traffic model, so city governments must still use existing methods to evaluate proposals. A/B Street is intended as a conversation starter and a tool to communicate ideas through interactive visualization, or as a reference for urban planning experts.

Code volume and team size

The code volume is approximately 100,000 lines of Rust code.

Team size.

Lead developer: 1 person

Community Contribution: 24 people

Project Cycle

The project started on March 11, 2018 and has been iterating at a high rate until the current June 2021. The average iteration cycle is one minor release per week.

Embark Corporation and Rust Game Ecosystem

Keyword: Rust Game Ecosystem


Embark is a game studio founded by Johan Andersson (a well-known figure in the gaming industry), who chose Rust as the main language at the beginning.

We believe that by openly sharing our work, problems and ideas with the community, we will create more opportunities for collaboration and discussion that will lead us to a great future for Rust and the gaming industry as a whole. — Johan Andersson (@repi), CTO, Embark

At Embark, we’ve been building our own game engine from scratch using Rust. We have prior experience in in-house development of RLSL prototypes, and we have a team of talented rendering engineers who are familiar with the problems of today’s shader languages in games, game engines, and other industries. As a result, we feel we are in a unique position to solve this problem.

We want to simplify our own internal development with a great language, build an open source graphics community and ecosystem, facilitate code sharing between GPUs and CPUs, and most importantly – empower our (future) users and other developers to create compelling experiences more quickly.

Creating a Rust game ecosystem is not just a slogan; Embark has joined the Rust Game Working Group and created a series of libraries to build the Rust game ecosystem.

These libraries are listed in the rust-ecosystem[223] repository.

Name Description

ash-molten[224] Statically linked MoltenVK for Vulkan on Mac using Ash

buildkite-jobify[225] Kubekite, but in Rust, using configuration from your repos

cargo-about[226] Cargo plugin to generate list of all licenses for a crate

cargo-deny[227] Cargo plugin to help you manage large dependency graphs

cargo-fetcher[228] cargo fetch alternative for use in CI or other “clean” environments

cfg-expr[229] A parser and evaluator for Rust cfg() expressions

discord-sdk[230] An open implementation of the Discord Game SDK in Rust

gsutil[231] A small, incomplete replacement for the official gsutil

krates[232] Creates graphs of crates from cargo metadata

octobors[233] GitHub action for automerging PRs based on a few rules

physx[234] Use NVIDIA PhysX[235] in Rust

puffin[236] Simple instrumentation profiler for Rust

relnotes[237] Automatic GitHub release notes

rpmalloc-rs[238] Cross-platform Rust global memory allocator using rpmalloc[239]

rust-gpu[240] Making Rust a first-class language & ecosystem for GPU code

spdx[241] Helper crate for SPDX expressions

spirv-tools-rs[242] An unofficial wrapper for SPIR-V Tools

superluminal-perf[243] Superluminal Performance[244] profiler integration

tame-gcs[245] Google Cloud Storage functions that follows the sans-io approach

tame-oauth[246] Small OAuth crate that follows the sans-io approach

tame-oidc[247] Small OIDC crate that follows the sans-io approach

texture-synthesis[248] Example-based texture synthesis generator and CLI example

tryhard[249] Easily retry futures

One of the most important libraries is rust-gpu[250], designed to make Rust a first-class language and ecosystem for building GPU code.

The desire to use Rust to write programs for GPUs stems not only from its safety features and high performance, but also from the need for modern tooling for packages and modules to make the development process more efficient.

Historically, GPU programming in games has been done by writing HLSL or, to a lesser extent, GLSL. These are simple programming languages that have evolved over the years alongside the rendering APIs.

However, as game engines have evolved, these languages have not provided mechanisms to handle large codebases and, in general, they have lagged behind compared to other programming languages.

While there are generally better alternatives to both languages, none of them can replace HLSL or GLSL, either because they are locked to a particular vendor or because they are not supported in traditional graphics pipelines. Examples include CUDA and OpenCL. And although attempts have been made to create new languages in this space, none has gained significant traction in the gamedev community.

Rust GPU develops ideas from the RLSL project, which attempted to create a Rust compiler targeting SPIR-V, the generic shader intermediate representation introduced with the Vulkan API and supported in OpenGL 4.6. At the current stage of development, Rust GPU can already run simple graphics shaders and compile significant parts of the core Rust standard library. That said, the project is not yet ready for widespread use; for example, shaders do not yet support loops.

Rust code is lowered to the SPIR-V shader representation by a dedicated backend developed for the Rust compiler, in the same class of backends as the Cranelift code generator used for the WebAssembly representation.

The current approach supports the Vulkan graphics API and the SPIR-V representation, but there are plans to target DXIL (DirectX) and WGSL (WebGPU) shader representations in the future. On top of Cargo and crates.io, tools are being developed to build and distribute packages of shaders in SPIR-V format.

The project is still in its very early days, iterating at an average of one minor version per week, and has now released version 0.3. There are about six main developers, and 35 community contributors.

Bevy (foreign/game engine/open source)

Key words: game engine / ECS


Bevy is a Rust-based implementation of a data-driven game engine.

Bevy fully practices the increasingly popular data-driven development concept, namely the ECS model. Compared with other, older open source engines such as Godot, Bevy offers a complete ECS development model, from building the wheels to building the game itself. Compared with commercial engines, Bevy carries very little historical baggage and does not need to remain compatible with the traditional GameObject model, unlike Unity’s DOTS development. In addition, thanks to the powerful expressiveness of the Rust language, the whole engine looks much cleaner and clearer than data-driven wheels built in C++. — “Enabling a Rust Game Engine: Bevy — ECS Part” [251]

Compared to other game engines implemented in Rust, such as Amethyst, Bevy is a latecomer, but its unique API design takes full advantage of Rust language features to make it very easy for developers to get started. Thanks to its Plugin mechanism, Bevy has gradually formed its own ecosystem, and many Bevy-based plugins have emerged.

Code volume and team size

The code volume is about 65,000 lines of Rust code.

Team size.

Main developer: 1 person

Community contribution: 244 people

Project Cycle

The project started on November 10, 2019 and is currently at version 0.5, still iterating at high speed. The Bevy project has also undergone a major refactoring of its underlying ECS system.

Other developments

https://gamedev.rs [252] is the official site of the Rust GameDev Working Group, which regularly publishes news about Rust in the games ecosystem.

Client Development
Feishu (Lark) App (Domestic/Commercial)

Key words: Lark / ByteDance


The Feishu (Lark) App, a product of ByteDance, is believed to be home to the largest Rust development team in China, with about 30 to 60 full-time Rust developers.

Feishu uses Rust in its client-side cross-platform components, with a reported code volume of over 550,000 lines (including tests and generated code).

Other information is not available at this time.

The Feishu team has also open sourced several Rust projects, which can be found in its GitHub repository [253].

Blockchain/Digital Currency
The blockchain/digital currency area includes, blockchain infrastructure, digital currency projects, etc.

Diem (Foreign/Open Source/Libra/ Supersovereign Currency)

Key words: libra/ Facebook


Diem [254], formerly known as Libra, has a mission to build a simple, borderless currency and a financial infrastructure that serves billions of people. The project strives to build a new decentralized blockchain, a low-volatility cryptocurrency, and a smart contract platform, with plans to open up new opportunities for responsible financial services innovation.

They believe that more people should have access to financial services and cheap capital, and that everyone has the inherent right to control the fruits of their legitimate labor. They believe that open, immediate and low-cost global money flows will create enormous economic opportunities and business value for the world, and are convinced that people will increasingly trust decentralized forms of governance. The global monetary and financial infrastructure should be designed and managed as a public good. Everyone has a responsibility to help advance financial inclusion, support users who adhere to online ethics, and continuously maintain the integrity of this ecosystem.

Status Update: Facebook’s Digital Currency Project Diem Drops Application for Swiss Payment License: Focuses on U.S. Market (May 13, 2021)

Code and Team Size

Code volume is about 300,000 lines of Rust code.

Team size.

Main development: 5 to 20 people

Community contribution: 150 people

Project cycle

The project averages one minor version per month; the current main framework is at version 1.2.0 and the SDK at version 0.0.2.

Substrate (foreign / open source / blockchain framework)

Keyword: parity/substrate


Substrate[255] is a project by Parity. Polkadot is built on Substrate.

The Substrate framework is known as a next-generation blockchain framework, similar to Java’s Spring or Python’s Django, except that those create websites while Substrate creates blockchains. It is developed in Rust by the Parity team and is an out-of-the-box blockchain constructor. Creating a blockchain on Substrate lets developers focus on business requirements without implementing the underlying P2P networking and consensus logic from scratch.

Code and Team Size

Approximate code size is 350,000 lines of Rust code.

Team size.

Main development: 4 to 10 people

Community contribution: 243 people

Project Cycle

Substrate has evolved to V3.0 after two major iterations. The current iteration cycle averages one minor version per month.

Nervos CKB (Domestic / Blockchain Public Chain)

Keyword:nervos/ ckb/ cita


Nervos Network is an open source public chain ecosystem containing a set of layered protocols with blockchain technology as the core and compatible with each other to solve the blockchain scalability dilemma.

Nervos CKB [256] (Common Knowledge Base) is a permissionless public blockchain. In the blockchain context, the common knowledge base referred to here is usually a state that has been verified and validated jointly by nodes around the world. Like Bitcoin, Nervos CKB is a state verification system. It is developed by Hangzhou Cryptape (Secret Monkey) Technology.

The company is one of the largest Rust developers in China, with about 30+ full-time Rust developers.

Code volume and team size

Single ckb project code volume, about 110,000 lines of Rust code.

Team size.

Main developer: 6 to 8 people (single ckb project)

Community contribution: 22 people

Project cycle

The development cycle averages about one minor version per week.

Other blockchain projects



Other areas where Rust is revolutionizing

zenoh[259] unifies data in motion, data in use, and data at rest with computations. It cleverly blends the traditional publish/subscribe model with geo-distributed storage, query, and compute, while retaining time and space efficiency well beyond any mainstream stack. zenoh can be used as a DDS replacement for ROS2 middleware, or integrated seamlessly with DDS.


New Zealand company Rocket Lab [260] is a global leader in small satellite launches and partners with NASA and xSpace. The team numbers 500 people and is growing every week. They currently use Rust.

The unofficial aerospace working group AeroRust [261] created the Are we in space yet?[262] website to track Rust’s open source projects in aerospace.

Automotive/Autonomous Driving.

erdos [263] for building data flow systems for self-driving cars and robotics applications.

Programming language.

Rust [264]: the Rust compiler has long been self-hosting (bootstrapped) and is considered one of the largest Rust projects in the world, at about 1.79 million lines of Rust code.

langs-in-rust[265]: this site lists dozens of new programming languages implemented in Rust, with no shortage of good work among them, for example Gleam[266] / Dyon[267] / Koto[268].


nannou[269] is designed to let artists create their own art. mindbuffer[270], a company in Germany, creates physical art installations based on nannou and Koto: using 486 stepper motors, 86,000 LEDs, and a 5-channel granular synthesis engine to create electronic artworks that change shape in brilliant colors[271].

VR domain.

makepad [272] is a VR, Web, and native-rendering UI framework and IDE based on Rust and WebAssembly (WebGL) technologies. Its author is the founder of the Cloud9 IDE. The project also includes a white paper [273] describing its vision.

Inventory of companies using Rust in production environments

The visionary journey of trusted programming is just beginning. We want to work with the Rust community, and the soon-to-be established Rust Foundation, to bring a smooth revolution to the telecom software industry.

Huawei joined the Rust Foundation in 2021 with the aim of contributing to the global rollout and development of Rust. It currently uses Rust in a number of open source projects and internal pilots, as well as making some contributions to the Rust ecosystem. We are currently preparing for large-scale use of Rust.

Huawei is also a strategic level sponsor of Rust Conf China 2020.

PingCAP and its Customers

PingCAP created the distributed database TiDB, whose underlying distributed storage layer, TiKV, is written in Rust.

TiDB is now used in real production environments by 1,500 leading companies in different industries [274]. Customers include China Mobile, ZTO Express, Palfish, Zhihu, NetEase Games, Meituan, JD Cloud, 360 Cloud, Toutiao, and many more.

PingCAP is also a silver sponsor of Rust Conf China 2020.

Alibaba / Ant Group

Rust is used by the Alibaba Cloud and DingTalk teams, as well as by Ant Group’s confidential computing and database teams.


The ByteDance Feishu team uses Rust for cross-platform client component development.

ByteDance/Feishu is also a Diamond Sponsor of Rust Conf China 2020.


The Zhihu search engine team uses Rust.

Shouqianba (Collect Money)

The Shanghai-based Shouqianba team uses Rust for message queue services in a production environment.

Geely Group

Geely Group’s digital technology segment is using Rust to build a blockchain.

Shanghai Xwei Information Technology

The company specializes in the development and production of aerospace and aviation training equipment, and is a technology company backed by top Chinese funds, mainly serving Chinese aerospace, military, and airline customers. It uses Rust in some of its products.

Hangzhou Cryptape (Secret Monkey) Technology

The CKB public chain project is a product of this company. It also has a sister company, Xita, which is likewise a production-level user of Rust.

Cryptape (Secret Monkey) and Xita are both gold sponsors of Rust China Conf 2020.

Other domestic blockchain companies


Bifrost is also a blockchain project, providing a liquidity cross-chain network for staking, and a silver sponsor of Rust China Conf 2020.

Darwinia Network

Darwinia Network is a decentralized bridge network built on Substrate, also in the blockchain industry, and a silver sponsor of Rust China Conf 2020.

There are many other blockchain companies that are using Rust, too many to list here.


Douban uses Vector, an open source tool written in Rust, which should count as passive use of Rust. Whether other teams there use Rust in other projects is unknown.


Google’s Fuchsia OS contains about 1.37 million lines of Rust code, making it the second largest Rust project in the ecosystem, after the Rust compiler itself.

Google is also a strong supporter of the Rust for Linux project and has funded the core development.

Google is also a member of the Rust Foundation.

Android [275]: “For the past 18 months, we’ve been adding support for Rust to the Android open source project. We have several early projects developed with Rust that we will be sharing in the coming months. Extending Rust to more operating systems is a multi-year project for us.”


A member of the Rust Foundation. Windows now fully supports Rust development.

Ever notice how fast VS Code’s search is? That’s because VS Code uses ripgrep[276] to power its search capability[277].


AWS was probably one of the first companies to support the Rust community, sponsoring the Rust Conf for several years.

At AWS, we like Rust because it helps AWS write high-performance, secure infrastructure-grade networking and other systems software. We use Rust for a number of service offerings, such as Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon CloudFront, Amazon Route 53, and more. We recently launched Bottlerocket, a Linux-based container operating system, also developed in Rust.


A member of the Rust Foundation, Facebook’s internal Rust projects have combined to exceed a million lines of code. Notable projects are Diem and its MOVE language.

Facebook also currently has a team dedicated to contributing to the Rust compiler and libraries.


The birthplace of the Rust language: the Servo browser engine project is developed in Rust and comprises about 300,000 lines of code.


In the job announcement, Apple wrote: “The performance and security of the systems we build are critical. We currently use asynchronous I/O and threads to distribute workloads and interact directly with the underlying Linux kernel interface. After our first successful use of Rust, we are migrating our established code base from C to Rust and plan to build new features primarily using Rust in the future.”

Apple is not currently a member of the Rust Foundation.


We’ve been using Rust in production at 1Password for a few years now. Our Windows team leads this effort, and about 70 percent of 1Password 7 for Windows is developed in Rust. In late 2019, we also ported 1Password Brain, the engine that powers the browser extension’s filling logic, from Go to Rust so that we could take advantage of the performance of deploying Rust programs to WebAssembly in browser extensions.


As our Rust development experience has grown, the language has shown strengths in two other areas: as a strongly memory-safe language, it is a great choice for edge computing; and as a language with an enthusiastic community, it has become a popular choice for rewriting components from scratch.


When starting a new project or component, we consider Rust first, but only where appropriate. Beyond performance, Rust has many advantages for engineering teams. For example, its type safety and borrow checker make refactoring code very easy. In addition, Rust’s ecosystem and tooling are excellent, and there is enormous momentum behind it.


A team at IBM used WebAssembly and Rust to achieve incredible performance improvements.


The ergonomics and principled design of Rust not only helped us tame the complexity of sync; we can also encode complex invariants in the type system and let the compiler check them for us.


npm’s first Rust program ran for a year and a half in production without a single alert. ‘The biggest compliment I can give Rust is that it’s boring,’ says Dickinson, ‘which is an amazing compliment.’ Deploying the new Rust service was straightforward, and soon they were able to forget about it, because it caused so few operational problems.


Just this month, we crossed the threshold of sending 7 billion notifications per day and set a record of 1.75 million per second.


As companies realize the benefits of cloud computing, Rust is gaining momentum, with Dropbox using Rust to rewrite some of its core systems and Mozilla using Rust to build the Firefox browser engine, demonstrating the power of Rust. At Qovery, we believe that Rust can build the future of the cloud.


With Rust, we’ll have a high-performance, portable platform that runs easily on Mac, iOS, Linux, Android, and Windows. Not only does this greatly expand our potential market, it also opens up many interesting new uses for our LIQUID technology. We are confident that we can complete our Rust journey with solid code, a better product, and an optimistic outlook on Astropad’s future.


We score submitted jobs efficiently, reliably, and securely in hardened Docker containers. We dispatch work to the cluster via Amazon EC2 Container Service (ECS), and many of the programs that cooperate in this pipeline are developed in Rust.


We would like to say a public thank-you to the five core teams of the Rust language, to Mozilla, and to the contributors of the many packages in the Rust ecosystem: we are using Rust to develop new update clients and servers, as well as a number of other software backbones, and hope to continue to expand our use of the language over time.


Like all of our projects today, it is written in Rust and follows current best practices. The project is configured as a workspace and the core crate provides a common library for discovering and managing firmware from multiple firmware services. Both fwupd and system76-firmware are supported.

Clever Cloud

For us, these benefits make a strong case that Rust is a reliable building block for production platforms. It’s a piece of code that we don’t have to worry about and that will allow other services to run safely.


The main acceleration we’ve seen in Rust deployments is that deployment tools on different platforms can easily adapt to the language. Agent developers can quickly learn the language and develop integrations with the managed runtime.


Although we have had some setbacks, I would like to highlight that our experience with Rust has been, overall, very positive. It’s a very promising project and we have a solid core and a healthy community.


Every server in our infrastructure runs a Rust-based proxy called fly-proxy. This proxy is responsible for accepting client connections, matching them to customer applications, applying handlers (e.g. TLS termination), and relaying traffic between servers.


Rust has been very forgiving to us. The service has been running in production for 4 months, handling an average of 40 requests per second with a response time of 10 ms, and it rarely uses more than 100 MB of memory.

There are many more companies, which can be found on the Rust website: Rust Production users[278]

Posted by: CoinYuppie. Reprinted with attribution to: https://coinyuppie.com/30000-words-to-read-about-the-rust-industry/