Recently, an open letter calling for a six-month pause on training artificial intelligence (AI) models more powerful than GPT-4 was signed by prominent technologists and researchers, including figures affiliated with MIT, Salesforce, and Google DeepMind. The letter proposes a temporary halt to training such models in order to allow more time to understand and respond to the potential risks of AI. While this proposal seems sensible, some experts believe it is dangerous and deceptive.
According to Peter Vessenes, a well-known technologist and investor, the move is alarming and could do irreparable harm to the American economy and its people. Vessenes argues that the letter amounts to a power grab by a handful of billionaires seeking to make themselves the sole arbiters of which technology the world gets to see and use. In other words, they have positioned themselves as the ones who decide what AI is “accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” raising concerns about attempted regulatory capture.
The letter’s signatories fear that AI’s rapid growth has made its risks difficult to manage, posing serious threats to people’s livelihoods, security, and long-term sustainability. However, Vessenes argues that the industry’s present organization is likely to entrench a cabal that decides who benefits from AI technology. He offers an analogy: imagine if a similar pause had been proposed in 1997, urging a six-month halt to the development of new e-commerce sites and citing research that they would destroy brick-and-mortar stores and enable terrorist financing. Today, such a call would be recognized as self-serving alarmism.
Recent breakthroughs in AI promise to increase human capacity as much as 1,000-fold, and many industries and researchers are working continuously to improve AI models through open and free development. Nonetheless, Vessenes argues that a few billionaires with self-serving goals should not be the ones deciding what is good and safe for the world when it comes to AI. Instead, he advocates mobilizing individual AI labs to come together and produce genuinely open, globally available AI models, sharing capabilities, methodologies, and network checkpoints.
The proposal to pause AI development could weigh on the AI industry, delaying research and development in AI technologies. If adopted, it could trigger a sharp decline in demand for AI stocks as fewer investments and developments are made. For instance, the proposal could delay OpenAI’s plans to build and deploy successors to its GPT-4 language model, which could prompt investors in AI-related stocks to sell in the short term.
However, some investors believe the proposal presents an excellent long-term opportunity for AI stocks, since it could pave the way for more open and transparent AI development and lessen the risk of regulatory intervention. If the proposal is well managed, with the swift creation of globally open AI models, it could boost demand for AI stocks in the long run.
The Bottom Line
The proposal to pause AI development is highly contentious, with conflicting views among experts and investors. Some argue that a pause is a sensible step for managing the potential risks of AI technology, while others believe it could harm the industry irreparably. In light of these uncertainties, investors should be cautious and weigh their portfolios against potential opportunities and risks over both the short and long term. It is also worth monitoring the situation closely, as any significant developments concerning the proposal could add to stock-market volatility.