To Lead in AI, the US Needs a Silicon Revolution

One thing that US politicians seem to agree on, despite a great many other differences, is that the country needs to lead technologically to maintain a position of economic and geopolitical preeminence. How to ensure such leadership will be a critical question for the next US president and his or her staff.

The past two administrations have taken some extraordinary steps to maintain an edge in both chipmaking and AI, two fields that are inextricably and intricately entwined. The US and its allies have restricted exports of cutting-edge chips and silicon-manufacturing equipment to key geopolitical rivals (aka China). In 2022, the US also passed the CHIPS and Science Act, legislation that authorizes roughly $280 billion in new funding, including about $52 billion aimed at bringing more microchip manufacturing back to American soil.

Laurie E. Locascio, undersecretary of standards and technology at the Department of Commerce and director of the National Institute of Standards and Technology, helps oversee the government’s chip investments. She tells WIRED that it is crucial to invent new chip designs and manufacturing techniques to ensure the US’s technological preeminence in AI. She adds that chip packaging—the process of combining components in new ways to boost performance—may be especially vital to the next wave of AI.

Locascio recently sat down with WIRED senior writer Will Knight at the Commerce Department’s headquarters in Washington, DC. Their conversation has been lightly edited for length and clarity.

How have generative AI and ChatGPT changed the US government’s microchip priorities?

Materials and substrates are really critical, and we need to think about how new types of materials can fit into new architectures. Hyperscale computing—how you connect chips together inside data centers—is also important.

Turning to your other role, as the director of NIST, you are also responsible for the development of standards and practices around AI. Where does the work of the AI Safety Institute, which is supposed to ensure AI is not deployed dangerously, stand?

The work is very early, but we are developing relationships with other AI safety institutes around the world, and we are currently developing guidelines for AI model testing.

A year ago, we heard a lot of talk of artificial general intelligence and AI posing an existential threat. Is that something NIST is actively studying?

I would say we are involved in every conversation that people are having, but we don’t have a team specifically focused on that. We are always alert for new threats, but, as things stand, there is very little rigorous research on this topic. We will continue to listen to the AI community as technology advances and inform our programs with facts and empirical evidence.

Today’s AI benchmarks seem like imperfect ways to measure progress in AI. It also seems that we don’t have a good way to measure the impact AI is having within companies and on the economy. Could NIST help with these things?

I don’t have a good measurement for the potential economic benefit of AI at my fingertips. If you have one, I’d love to see it. We want to innovate, but we need to do it responsibly. We need to do it in a way that people can trust the technology so we can take advantage of it. We just have to make sure we can use it in a way that serves us well.

Source: Wired