I was 15 years old when I first worked with an IBM System/34 computer. It was not a concept; it was a machine. Heavy, robust, with integrated hard drives and eight-inch diskettes spinning as a constant reminder that technology could be touched. That system represented, at the time, the technological supremacy of the United States in the mid-sized business sector. Technology's aspiration was precisely that: tangible machines, built to last, giving physical form to American industrial and strategic leadership.

The IBM S/34 was one of the symbols of an era in which the United States not only led in research but also in manufacturing and production. Every component reflected a full command of the technological chain: design, fabrication, and use. In those years, U.S. hegemony was inseparable from the power of building concrete, visible, and functional goods.

In parallel, the arrival of the IBM PC revolutionized computing forever. With it, technological power ceased to be confined to corporate rooms and began to enter homes and offices. That was the moment when the United States showed its ability to transform the tangible into a mass-market product, setting a standard that would define the global industry for decades.

Later, Palm pointed the way toward mobile computing with its handheld devices (the company would eventually be absorbed by Hewlett-Packard), anticipating the possibility of carrying advanced tools in the palm of one’s hand. That visionary step opened the path that Apple would later perfect, bringing the most advanced technology into iconic devices that became global symbols.

But at some point, after these advances, intangible innovation began to surpass hardware as the main engine of profitability. It became more attractive to investors because it promised faster and more scalable returns. The software industry, algorithms, and data management shifted the focus of major corporations and financial capital toward what could multiply without factories. Silicon Valley emerged as the emblem of this new model of innovation: patents, intellectual property, and digital platforms.

Hardware did not disappear (data centers, networks, and chips remained the foundation), but supremacy was increasingly measured by the ability to control and monetize knowledge rather than by producing and assembling machines. That transition strengthened the financial and intellectual power of the United States, while gradually weakening its manufacturing base.

From this evolution came the current gap. A gap that is not theoretical or conceptual: it is measured in factories that reduce their activity, in exports that change ports, and in patents circulating without a material foundation of their own. The U.S. still tries to preserve intellectual leadership, but it has clearly neglected the ground where it was once hegemonic: the production of concrete goods.

Closing that gap requires more than maintaining an advantage in the realm of ideas. It means rebuilding a balance between research and production, between abstract design and the factory that brings it to life. The challenge for the United States is not merely to preserve the memory of past leadership, but to demonstrate that it can still turn ingenuity into systems that are touched, used, and capable of transforming everyday life.

From tangible to intangible

In the 1970s and 1980s, the United States held clear dominance in technological manufacturing. The entire chain (research, design, production, and distribution) was largely within its borders. Companies like IBM, Hewlett-Packard, Texas Instruments, Intel, and Motorola were not only innovators; they were also producers. Their machines and components were designed, built, and distributed domestically, becoming the backbone of industries and households alike. Industrial clusters across the Midwest and the West Coast functioned as engines of a nation that exported both knowledge and concrete goods.

That model began to shift in the 1990s, when corporations embraced globalization. Drawn by lower labor costs in Asia and increasingly efficient supply chains, they moved assembly lines and manufacturing processes abroad. Japan and South Korea were the first destinations; later, Taiwan and China assumed central roles. This gave rise to offshoring as a dominant strategy: keeping design and research at home, while outsourcing tangible production abroad.

The effect was profound. On one hand, U.S. companies reduced expenses and improved their profit margins. On the other, the country began to lose its advanced manufacturing capacity. The industrial belts that had sustained material power for decades weakened, and many cities in the Midwest saw their technological factories reduce activity or close. What seemed a logical decision in financial terms became, in strategic terms, the erosion of productive autonomy.

At the same time, the very nature of innovation shifted. Profitability was no longer tied to hardware but migrated toward software, management systems, and digital services. Companies like Microsoft and Oracle demonstrated that an operating system or a database could generate far greater benefits than producing an entire computer. Scalability became the key: a program could be replicated infinitely without the need for assembly plants.

Capital aligned with this logic. Investment funds concentrated on firms capable of producing abstract knowledge: algorithms, platforms, patents. Supremacy was no longer measured by the ability to manufacture equipment, but by the accumulation of intellectual property and the speed of financial returns. With the rise of Google, Amazon, and Facebook, the structure of U.S. economic power tilted even further toward the realm of the conceptual.

Hardware remained essential (semiconductors, servers, and networks were the physical foundation of the entire digital revolution), but it slipped into a secondary position in strategic priorities. The United States, once a leader in producing chips and consumer devices, began to depend on factories in Asia to meet demand. Companies like Intel, synonymous with industrial innovation in the 1980s and 1990s, faced increasing challenges to keep pace with competitors such as TSMC in Taiwan and Samsung in South Korea.

The weakening of industrial policy accelerated this trend. While China launched ambitious state-backed investments in technological manufacturing, the United States placed its faith in the market to chart the course. The result was an unbalanced ecosystem: brilliant in abstract innovation, but with growing voids in material production.

Today, the balance is clear. The U.S. retains leadership in research and development of software, digital platforms, and intellectual property. But the material capacity that once allowed it to dominate computing and electronics has eroded. The production of cutting-edge chips is concentrated in Taiwan and South Korea; much of device assembly depends on China; and American industry retains more capacity to design than to manufacture.

This shift does not mean the U.S. has lost relevance, but it does mean that the nature of its technological power has changed. Leadership no longer rests on comprehensive control of the chain but on just part of it. That asymmetry now conditions its strategic position and sets the stage for the competition of the twenty-first century.

The strategic gap

Today, there is a clear and undeniable distance between the United States and China that cannot be explained solely by their economic models, but rather by the way each has turned technology into a tool of power. American innovation translates into global influence over standards, regulations, and digital markets. China’s strength, sustained by its ability to manufacture concrete goods, translates into control over the physical flow of products that sustain those very innovations.

This contrast creates a paradox. The United States sets the rules of knowledge, but depends on external manufacturing to materialize it. Yet the dynamic has already shifted: China is no longer just the workshop of the world, but is advancing toward the integration of design, production, and deployment.

That shift emerges from a vast material base—factories, suppliers, logistics, ports, and control over inputs and processes—now used as a learning platform. The curve is predictable: first assembly, then manufacturing, later design, and finally integration of hardware with software.

China no longer just assembles devices; it develops its own consumer electronics, operating systems tailored to its market, and service ecosystems linked to those products.

In electric vehicles, it combines batteries, motors, and energy management software in a single industrial package, controls key stages of the chain (materials, cells, modules, platform integration), and updates vehicle systems via software after delivery.

In telecommunications, it merges network equipment with management and analytics layers. In space innovation, it builds rockets, satellites, and ground stations and links them with data processing for commercial and security uses.

In defense and military development, it integrates materials, telecommunications, and autonomous systems into dual-use applications that range from unmanned vehicles to command-and-control systems assisted by artificial intelligence.

In quantum computing and communications, it advances experimental superconducting processors, quantum communication links, and pilot quantum key distribution networks aimed at building sovereign infrastructure in emerging technologies.

And in artificial intelligence, it develops computing centers, models, and applications spanning industry, urban management, finance, consumer markets, science, and the military, creating a digital ecosystem fed by both social and productive data.

The result is an integrated model: factories that learn and software trained with real data from production, transport, and consumption. The advantage lies in the short cycle between design, prototyping, manufacturing, and deployment. When a production line detects an improvement, it is quickly incorporated into the product and into the algorithm; when the algorithm identifies an efficiency, it translates into immediate material adjustment. That feedback loop, supported by scale and logistical control, reduces costs, shortens timelines, and turns manufacturing into a driver of innovation.
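That feedback loop can be sketched as a toy simulation. Everything here is invented for illustration (the production function, its hidden optimum, the step size); real industrial control is vastly more complex, but the sketch captures the cycle the paragraph describes: the line produces data, the algorithm reads it, and the adjustment flows straight back into the line.

```python
import random

random.seed(42)  # deterministic toy run

def production_run(setting: float) -> float:
    """Toy production line: yield peaks when the machine setting
    is near a hidden optimum of 7.0, plus measurement noise."""
    return 100 - (setting - 7.0) ** 2 + random.uniform(-0.5, 0.5)

def update_setting(setting: float, step: float = 0.5) -> float:
    """Toy 'algorithm': probe both directions on the line and move
    the setting toward whichever probe produced the higher yield."""
    up = production_run(setting + step)
    down = production_run(setting - step)
    return setting + step if up > down else setting - step

# Closed loop: each production cycle feeds data back into the
# algorithm, and the algorithm immediately adjusts the line.
setting = 3.0
for cycle in range(20):
    setting = update_setting(setting)
# After enough cycles the setting settles near the hidden optimum.
```

The short cycle time is the point: because adjustment happens inside the production loop rather than in a separate design office, each run of the line doubles as an experiment.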

In contrast, the United States deliberately separated these layers to optimize costs and returns: knowledge here, manufacturing abroad. That choice worked while external production was cheap, stable, and politically neutral. Today, it creates an asymmetry of power: the side that integrates design, data, manufacturing, and distribution sets the rhythm.

The gap is no longer “they produce and we design,” but rather who governs the complete articulation between knowledge and execution. That is where the global competition of the twenty-first century is being defined, and it is in that arena that China has begun to establish a different tempo.

The priority of the tangible in the United States

Today, the United States retains a strong position in abstract innovation. Its universities, private laboratories, and corporations are still global references in software, artificial intelligence, biotechnology, and intellectual property. Its research centers continue to set trends that are replicated around the world. Yet this strength does not automatically translate into comprehensive power. Knowledge generates value, but it requires a material foundation to support and project it. That is where the urgency of rebuilding productive capacity becomes evident.

History shows that the U.S. was at its strongest when it maintained an integrated system: research, development, and manufacturing working as one circuit. The aerospace sector during the space race of the 1960s is the clearest example. NASA and its contractors not only designed spacecraft but also built, tested, and launched them. A similar pattern was seen in IBM’s corporate computing ecosystem, where chip design, software, and hardware assembly coexisted domestically. That integration guaranteed strategic autonomy.

The separation between research and production, which began in the 1990s under the logic of globalization, dismantled that circuit. Knowledge stayed at home, but manufacturing moved abroad. Initially, this decision seemed pragmatic: reduce costs and free capital for cutting-edge research. Over time, however, the accumulated effect was different: growing dependence on external chains to produce essential goods. Innovation remained concentrated in the abstract, while production fragmented and shifted overseas.

Today, the need is not to recover a lost hegemony, but to reintegrate technological innovation processes. The country must build a model in which patents are not ends in themselves but inputs for local factories that transform knowledge into concrete products. This is not just about producing microchips again, but about creating an ecosystem in which every advance in artificial intelligence, biotechnology, or efficient transportation is matched with a material reflection: servers, laboratories, pilot plants, prototypes, and assembly lines.

Here, a decisive factor emerges: economic viability as a cultural barrier. Unlike software industries, which can grow rapidly with relatively small investments and almost immediate returns, hardware requires longer cycles, costly prototypes, and large-scale industrial commitments. In this context, fear of non-commercialization becomes a critical obstacle. Many high-tech material projects stall not because they lack value, but because they cannot guarantee profitability at the outset. Yet immediate profitability is not the true measure of worth—the essential factor is the platform of capabilities built with each attempt.

In strategic sectors, the urgency is clear. The U.S. designs processor architectures but depends on Taiwan and South Korea for advanced fabrication. It leads in next-generation battery innovation, but production is largely concentrated in China. In biotechnology, American laboratories produce notable discoveries, but the manufacture of drugs and medical equipment relies on external inputs. In telecommunications, the U.S. regulates global standards, yet much of its infrastructure is built with hardware manufactured abroad. In efficient transportation, it develops electric vehicles and autonomous mobility solutions, but lacks the industrial infrastructure to sustain scaling. In each case, the equation is the same: innovation without full productive integration.

The priority now is to rebuild continuity between the laboratory, the pilot plant, and industrial production. A scientific discovery must have a clear path to domestic manufacturing, rather than becoming trapped in exported patents. A digital innovation must rest on servers, data centers, and equipment developed within the country. This does not imply abandoning global interdependence but redefining it from a more balanced position.

The lesson is clear: recovering strength in manufacturing requires moving beyond immediate viability and rebuilding the innovation cycle. The United States must close the loop that begins with the idea and ends with the product, sustaining it even when the first steps are not commercial. Its future strength will depend on this ability to reintegrate processes and transform knowledge into systems that reinforce its autonomy.

Investment in research: building technology without fear of commercial viability

The central challenge for the United States in rebuilding its material capacity does not lie only in the lack of plants or supply chains, but in how investments are conceived. For decades, both private capital and public agencies favored projects that could demonstrate immediate profitability. That strategy worked well in software and digital services, where an algorithm or an application can scale within months. But it is inadequate for physical technologies, which require long, costly processes with greater uncertainty in their early stages.

The issue is also cultural. The American innovation system has become accustomed to measuring success in terms of early commercialization. This creates a premature filter that discards projects with significant potential simply because they do not promise quick returns. The consequence is evident: fear of non-commercialization prevents the consolidation of new industries.

Overcoming this logic requires designing new investment mechanisms. Applied research in physical technologies must be understood as a platform for cumulative capacities, rather than as a product line to be monetized from day one. Every experimental lab, every pilot plant, and every prototype forms part of a learning process that strengthens technological autonomy and generates long-term benefits.

History demonstrates this lesson. Aerospace, nuclear energy, biotechnology, and the internet all emerged from ventures that endured long incubation periods without immediate commercial returns. It was the willingness to finance immature technologies that allowed entire sectors to consolidate, later giving rise to thousands of companies and millions of jobs.

That same mindset must be applied today to frontier fields: semiconductors, quantum computing, advanced telecommunications, biotechnology, and efficient transportation. In the latter, the effort should not stop at developing electric vehicles but extend to integrated systems of autonomous mobility, intelligent rail networks, and electrification platforms capable of transforming logistics and urban transport. Each step requires factories, specialized materials, and assembly lines, alongside algorithms for control and software for management.

A particularly sensitive case is robotics. The U.S. has produced remarkable research advances in universities and algorithm development, but the absence of a coherent strategy for production and deployment has prevented it from securing a solid position. Robotics brings together sensors, actuators, advanced materials, assembly lines, and control software. Failing to invest consistently in this field out of fear of delayed commercial viability means missing a critical domain with applications in industry, healthcare, and defense.

For this reason, the country needs instruments that reduce the pressure of immediate profitability: shared-risk funds, tax incentives for pilot plants, public procurement programs that guarantee early demand, and public-private consortia capable of sustaining high-risk technological projects. What matters is giving researchers and entrepreneurs a clear pathway from laboratory to industrial scale, without being trapped in the short-term cycle.

In short, investing in applied research means accepting that success cannot be measured only by rapid sales, but by the creation of a material capacity that supports conceptual innovation. The United States must recover its willingness to build, test, and scale even when the first results are not commercial. Only with that vision can it reconstitute an industrial ecosystem capable of sustaining leadership in the decades ahead.

A step forward

The trajectory of the United States shows that its strongest innovations emerged when research and production worked as one. That model must now be restored in a far more competitive world—not to claim hegemony, but to ensure that intellectual creation rests on a material base capable of sustaining it.

The step forward lies in integration. Semiconductors, quantum computing, efficient transportation, biotechnology, and robotics do not advance by ideas lacking factories, nor by factories without ideas. They demand a cycle where knowledge becomes matter and matter generates new knowledge. That is the real ground of competition in the twenty-first century, and the place where the United States must demonstrate that invention and production can once again move in unison.

The future will not be defined by brilliance alone, but by the capacity to build. To move forward means closing the distance between the laboratory and the assembly line, between the concept and the machine. That integration is not optional; it is the condition for technological autonomy and for a more balanced global order.