This update comes later than usual, mainly due to eye issues.
With the progression of presbyopia, my eyes can no longer adapt to the three different distances of a phone, a laptop, and an external screen. Not only are the prescriptions different, but the pupillary distance and fitting height also vary, and progressive lenses suffer from a severely narrow field of view. There is no ideal solution.
I’ve heard that many people in their forties and fifties get artificial lens implants early, allowing them to live and exercise without glasses. So, I consulted a doctor. The doctor said it’s a bit early for me to do that now and mentioned a new type of glasses called Neurolens. The basic idea is that most adults have a certain eye misalignment when looking at screens, so a bit of prism is added to the lenses. Thus, my glasses would combine concave lenses (for myopia) + cylindrical lenses (for astigmatism) + prisms (for misalignment).
Misalignment between the eyes is inevitable because the positions of the two eyes are different, leading to a dominant eye and a non-dominant eye. Our vision is the result of synthesis by the brain. In outdoor environments, the brain’s synthesis workload is low, so the eyes don’t tire easily. However, when looking at screens, a large number of sharp characters require real-time alignment and synthesis by the brain, leading to headaches and other issues due to the carbon-based GPU overheating.
Neurolens is reportedly still quite controversial because the human eye and brain have evolved to work together over a long period, and no one knows what the result of “brutally” altering the vision of both eyes will be. But how will humans evolve to adapt to spending half their day looking at electronic screens?
Since the topic is about changing the world, let’s extend from semiconductors to discuss the current limitations of human technology.
VI. Establishment of the MOS Model and Limitations of Experience
As mentioned earlier, MOSFETs are the cornerstone of our information age. To simulate and design the behavior of these devices accurately, robust computational models are needed. Berkeley SPICE (Simulation Program with Integrated Circuit Emphasis) and BSIM (Berkeley Short-channel IGFET Model) were born in this context.
As far as I know, the establishment of these models is mainly based on classical semiconductor physics theories, such as carrier transport theory and the drift-diffusion model. The BSIM model contains a large number of empirical formulas, most of which are obtained by fitting experimental data. These formulas can describe the electrical behavior of MOSFETs in different operating regions (such as subthreshold, linear, and saturation regions).
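To make the operating regions concrete, here is a minimal sketch of the classic long-channel square-law model. This is far simpler than BSIM, which layers hundreds of fitted parameters on top of such physics; the parameter values below (threshold voltage, transconductance factor, subthreshold slope) are illustrative placeholders, not values from any real process.

```python
import math

def drain_current(vgs, vds, vth=0.5, k=2e-4, n=1.5, vt_thermal=0.0259, i0=1e-9):
    """Drain current [A] of an idealized long-channel n-MOSFET.

    vgs, vds   : gate-source / drain-source voltages [V]
    vth        : threshold voltage [V] (illustrative)
    k          : transconductance parameter mu*Cox*W/L [A/V^2] (illustrative)
    n, i0      : subthreshold slope factor and leakage scale (illustrative)
    vt_thermal : thermal voltage kT/q at room temperature [V]
    """
    vov = vgs - vth                       # overdrive voltage
    if vov <= 0:
        # Subthreshold region: current decays exponentially below Vth.
        return i0 * math.exp(vov / (n * vt_thermal))
    if vds < vov:
        # Linear (triode) region: device behaves like a voltage-controlled resistor.
        return k * (vov * vds - 0.5 * vds ** 2)
    # Saturation region: to first order, current is independent of vds.
    return 0.5 * k * vov ** 2
```

The three branches correspond exactly to the subthreshold, linear, and saturation regions mentioned above; BSIM’s empirical formulas exist largely to stitch such regions together smoothly and to capture the short-channel and quantum effects this toy model ignores.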
With the advancement of technology and the reduction in device size, the BSIM model has undergone multiple updates and improvements. Over time, the BSIM model gradually incorporated considerations of some quantum mechanical effects, especially in later versions like BSIM4 and BSIM-CMG (for new devices like FinFETs). As short-channel effects became apparent, the model was expanded to better describe these effects.
These sound quite interesting, but in practice a large number of researchers are engaged in tedious work. Some physics PhDs jokingly say that solid-state physics is just about measuring resistance: combine various materials, apply various electromagnetic fields, and measure resistance over and over. Because theoretical physics, especially high-energy physics, has hit a dead end and struggles to secure funding, many mathematically talented PhDs are forced into condensed matter physics, becoming modern kiln workers.
There is a noticeable inversion in society: the closer the use of technology is to the basic theoretical end, the lower the income: software > hardware > materials > pure theory. We won’t discuss whether this is fair; perhaps from the perspective of industry income-to-investment ratio, this phenomenon seems reasonable.
Is pure theory really that awkward? Where is the gap?
So, the question arises: since quantum mechanics is already complete in the microscopic domain, why do we have to use empirical models instead of directly constructing a purely mathematical perfect model based on quantum mechanics?
VII. Why is a Perfect Model So Hard to Achieve?
The answer lies in computational complexity and multi-scale problems.
Quantum mechanics provides a framework for exactly describing electron behavior, but an actual semiconductor device is a complex system of multi-scale, many-body interactions. Solving the full Schrödinger equation for such a device, especially with electron-electron and electron-phonon interactions included, is extraordinarily complex. Numerically exact many-body methods (such as quantum Monte Carlo) demand enormous computational resources, making them infeasible even on today’s most powerful computers.
Semiconductor devices involve multi-scale problems from the atomic level (nanoscale) to the circuit level (macroscale). Combining these different scales is usually done by coupling different physical models, which presents issues of quantum-classical coupling and time scales. From femtosecond electron transitions to microsecond thermal effects, the computational load is dauntingly large.
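The scale of the problem can be made concrete with a back-of-envelope calculation. Storing the full wavefunction of N spin-1/2 particles requires 2^N complex amplitudes, and bridging femtosecond electron dynamics to microsecond thermal effects requires an astronomical number of time steps:

```python
# Back-of-envelope: why brute-force quantum simulation is infeasible.
# The state space of N spin-1/2 particles has dimension 2**N; each complex
# amplitude takes 16 bytes at double precision.
def wavefunction_bytes(n_particles):
    return 16 * 2 ** n_particles

# Around 50 spins, the wavefunction alone exceeds ten petabytes of memory --
# and any real device has vastly more interacting degrees of freedom.
memory_50 = wavefunction_bytes(50)       # ~1.8e16 bytes

# Multi-scale time stepping: resolving femtosecond electron dynamics over a
# microsecond thermal transient needs on the order of a billion steps.
fs, us = 1e-15, 1e-6
n_steps = us / fs                        # ~1e9 time steps
```

Exponential memory in particle count, multiplied by a billion-fold spread in time scales, is what forces the retreat to empirical and coupled multi-scale models.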
VIII. Practicality is King
Despite the significant improvements in AI and high-performance computing capabilities in recent years, a key issue is how to make models not only theoretically perfect but also practically useful. For engineering applications, models need to be not only accurate but also provide results within a reasonable time frame, which involves:
• Parameter extraction and model tuning: Even with a “perfect model,” it still needs to be parameterized using actual experimental data to ensure it accurately describes different process nodes and device structures. This parameter extraction is itself a complex process.
• Computational time and resources: Although humans now have powerful cloud computing capabilities, for a complete semiconductor device simulation including all quantum mechanical effects, the computational time and resource demands are still too high, especially in design environments requiring rapid feedback. The trade-off between computational efficiency and model accuracy remains a practical issue.
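A toy version of parameter extraction: below, synthetic saturation-region I-V points are generated from the square-law relation I_D = ½k(V_GS − V_th)², and the two parameters are recovered by a linear least-squares fit of √I_D against V_GS. This is a sketch of the idea only; real BSIM extraction flows tune hundreds of parameters against measured wafers, and the values here are made up.

```python
import math

# "True" device parameters used to synthesize the measurement data.
K_TRUE, VTH_TRUE = 2e-4, 0.5
vgs = [0.7, 0.8, 0.9, 1.0, 1.1, 1.2]
i_d = [0.5 * K_TRUE * (v - VTH_TRUE) ** 2 for v in vgs]   # synthetic I-V points

# In saturation, sqrt(I_D) = sqrt(k/2) * (V_GS - V_th), i.e. linear in V_GS.
y = [math.sqrt(i) for i in i_d]
n = len(vgs)
mx, my = sum(vgs) / n, sum(y) / n
slope = (sum((x - mx) * (yy - my) for x, yy in zip(vgs, y))
         / sum((x - mx) ** 2 for x in vgs))
intercept = my - slope * mx

k_fit = 2 * slope ** 2          # slope equals sqrt(k/2)
vth_fit = -intercept / slope    # x-intercept of the fitted line is V_th
```

With noiseless synthetic data the fit recovers the parameters exactly; with real measurements, choosing which points lie in which region, and which of many coupled parameters to fit first, is what makes extraction a discipline of its own.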
In engineering applications, designers are usually more concerned with the practicality of the model, i.e., whether it can provide sufficiently accurate results in a short time for large-scale circuit design and simulation. Therefore, although a more perfect model is theoretically possible, in engineering practice, it may not necessarily be the optimal choice: engineers are accustomed to using current models, and the toolchains and design processes are built around these models. Introducing entirely new, more complex models may require redesigning these processes and conducting extensive validation and training.
The development of science often outpaces the actual needs of practical applications. For instance, while we can theoretically construct more perfect models, the current design and manufacturing processes may not require such high-precision models. In such cases, pursuing “perfection” might actually increase unnecessary costs and complexity.
A similar example is the invention and improvement of the steam engine. At the beginning of the Industrial Revolution, Watt improved Newcomen’s steam engine, laying the foundation for the widespread application of steam-powered machinery. However, Watt’s work was not based on the first principles of thermodynamics but rather on extensive experiments and engineering experience. It wasn’t until the mid-19th century that the first and second laws of thermodynamics gradually took shape, enhancing our understanding of the steam engine’s working principles. Similarly, although quantum mechanics provides us with a theoretical foundation, the complexity in practical applications forces us to rely on empirical formulas and simplified models.
Moreover, phenomena in semiconductor devices involve quantum effects at the nanoscale to classical physics at the macroscopic scale, and unifying these different scales of physical phenomena into one model is a significant challenge in itself.
IX. Potential Future Shift: A Leap from Experiment to Computation
Nevertheless, with the rapid development of computational technology, is the problem different now? Could solid-state physics be on the brink of a significant shift from experiment-driven to computation-driven scientific discovery?
In recent decades, the combination of first-principles calculations (such as Density Functional Theory, DFT) and machine learning has begun to show potential in materials science. Scientists no longer rely solely on repeated laboratory experiments but use computer simulations to predict the behavior of new materials by modeling their electronic structures and physical properties. This computation-driven research approach not only accelerates the material discovery process but also helps us better understand experimental results, gradually building models closer to “perfection.”
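The screening loop at the heart of this approach can be sketched in a few lines: fit a cheap surrogate model on properties already computed (e.g., by DFT), then rank a pool of unseen candidates by the surrogate’s predictions so that only the most promising ones get expensive follow-up. Everything below is synthetic — the descriptor values, property values, and material names are placeholders, not real DFT results.

```python
# Toy computation-driven screening: train a linear surrogate on "computed"
# (descriptor, property) pairs, then rank candidate materials by prediction.
train = [(0.2, 1.1), (0.5, 2.0), (0.8, 3.1), (1.1, 4.0)]  # synthetic data

n = len(train)
mx = sum(x for x, _ in train) / n
my = sum(y for _, y in train) / n
slope = (sum((x - mx) * (y - my) for x, y in train)
         / sum((x - mx) ** 2 for x, _ in train))
intercept = my - slope * mx

# Hypothetical candidate pool: name -> descriptor value.
candidates = {"mat-A": 0.3, "mat-B": 0.9, "mat-C": 0.6}
ranked = sorted(candidates,
                key=lambda m: slope * candidates[m] + intercept,
                reverse=True)
# ranked lists candidates by predicted property, best first.
```

Real materials-genome pipelines replace the one-descriptor linear fit with graph networks or Gaussian processes over thousands of DFT calculations, but the structure — compute, fit, rank, verify — is the same.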
Computation-driven scientific research might bring about a similar revolution in the 21st century, enabling precise predictions and designs of materials and devices without relying on large experimental setups.
This shift is not just a change in research paradigms but could also have widespread impacts on human society.
For example, pure silicon is key to many modern technologies, from chips to solar cells. However, its properties as a semiconductor are far from ideal. Silicon’s thermal conductivity and electron mobility are not optimal. A potential solution is to introduce new materials with high carrier mobility into the channel region, such as gallium arsenide, indium arsenide, and gallium antimonide. Electrons can move more than ten times faster in these materials, allowing these small switches that impact our world to switch faster. Equally important, as electrons move faster, chips can operate at lower voltages, improving energy efficiency and reducing heat generation.
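The “more than ten times faster” claim can be checked against rough literature numbers. The mobilities below are approximate room-temperature bulk electron mobilities from standard references; treat them as order-of-magnitude figures, since real channel mobility depends strongly on doping, strain, and confinement.

```python
# Approximate room-temperature bulk electron mobilities [cm^2/(V*s)].
# Order-of-magnitude literature values, not device-measured figures.
mobility = {
    "Si":   1400,
    "GaAs": 8500,
    "GaSb": 3000,
    "InAs": 40000,
}

# How many times more mobile are electrons in each material than in silicon?
ratios = {m: mu / mobility["Si"] for m, mu in mobility.items()}
# InAs comes out around 30x silicon, consistent with the "more than ten
# times faster" figure for III-V channel materials.
```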
With computation-driven approaches, the discovery of new materials will become more efficient. For instance, high-performance battery materials, catalysts, superconductors, and topological materials can be screened and optimized through high-throughput computations, accelerating their applications in energy, environmental protection, and other fields. Similar to the revolutions brought by human genome research in archaeology, biochemistry, and medicine, the materials genome might also be an intriguing direction.
As supercomputing technology becomes more widespread, the shift from experimental physics to computational physics could spark a new wave of technological revolution. This will not only change the way researchers work but also impact various fields from chip design to materials science, from energy development to environmental protection. Future scientists will rely more on computer simulations and algorithm optimization rather than traditional laboratory experiments, profoundly changing the speed and mode of scientific discovery.
X. However
Reflecting on the invention of gunpowder, from its initial use as a simple firearm to its large-scale application, it gradually changed the landscape of warfare, politics, and geopolitics. However, the early application of firearms was not based on a profound understanding of explosive chemistry and aerodynamics but rather on the accumulation of experience. Similarly, the semiconductor MOS model reflects this experience-driven path of technological evolution to some extent.
With the continuous advancement of computational technology, there may be a new era in the future where computation-driven scientific discovery gradually replaces experiment-driven research paradigms. However, humanity’s current computational and scientific capabilities are far from being able to compute slightly larger-scale many-body systems from the quantum level, such as simulating a seed or even a bacterium.
Is there a visible path to solving this problem? Unfortunately, because we cannot establish a complete relationship between microscopic quantum states and macroscopic measurable quantities — that is, because of the seemingly insurmountable gap between quantum mechanics and classical physics — the answer can only be a despairing one.
One can only say that your human science is still too backward.