
Chinese researchers develop AI model for military use on back of Meta's Llama




Papers show China reworked Llama model for military tool

China's top PLA-linked Academy of Military Science involved

Meta says PLA 'unauthorised' to use Llama model

Pentagon says it is monitoring competitors' AI capabilities


By James Pomfret and Jessie Pang

Nov 1 (Reuters) - Top Chinese research institutions linked to the People's Liberation Army have used Meta's publicly available Llama model to develop an AI tool for potential military applications, according to academic papers and analysts.

In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People's Liberation Army's (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta's Llama as a base for what they call "ChatBIT".

The researchers used an earlier Llama 2 13B large language model (LLM) from Meta META.O, incorporating their own parameters to construct a military-focused AI tool to gather and process intelligence, and offer accurate and reliable information for operational decision-making.

ChatBIT was fine-tuned and "optimised for dialogue and question-answering tasks in the military field", the paper said. It was found to outperform some other AI models and was roughly 90% as capable as OpenAI's powerful ChatGPT-4. The researchers didn't elaborate on how they defined performance or specify whether the AI model had been put into service.

"It's the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes," said Sunny Cheung, associate fellow at the Jamestown Foundation who specialises in China's emerging and dual use technologies including AI.

Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.

Its terms also prohibit use of the models for "military, warfare, nuclear industries or applications, espionage" and other activities subject to U.S. defence export controls, as well as for the development of weapons and content intended to "incite and promote violence".

However, because Meta's models are public, the company has limited ways of enforcing those provisions.

In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse.

"Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy," Molly Montgomery, Meta's director of public policy, told Reuters in a phone interview.

Meta added that the United States must embrace open innovation.

"In the global competition on AI, the alleged role of a single, and outdated, version of an American open-source model is irrelevant when we know China is already investing more than a trillion dollars to surpass the US on AI," a Meta spokesperson said in a statement.

The Chinese researchers include Geng Guotong and Li Weiwei with the AMS's Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from the Beijing Institute of Technology and Minzu University.

"In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also ... strategic planning, simulation training and command decision-making will be explored," the paper said.

China's Defence Ministry didn't reply to a request for comment, nor did any of the institutions or researchers.

Reuters could not confirm ChatBIT's capabilities and computing power, though the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs.

"That's a drop in the ocean compared to most of these models (that) are trained with trillions of tokens so … it really makes me question what do they actually achieve here in terms of different capabilities," said Joelle Pineau, a vice president of AI Research at Meta and a professor of computer science at McGill University in Canada.

The research comes amid a heated debate in U.S. national security and technology circles about whether firms such as Meta should make their models publicly available.

U.S. President Joe Biden in October 2023 signed an executive order seeking to manage AI developments, noting that although there can be "substantial benefits to innovation," there were also "substantial security risks, such as the removal of safeguards within the model".

This week, Washington said it was finalising rules to curb U.S. investment in artificial intelligence and other technology sectors in China that could threaten national security.

Pentagon spokesman John Supple said the Department of Defense recognised that open-source models had both benefits and drawbacks, and that "we will continue to closely monitor and assess competitors' capabilities".

'COOKIE JAR'

Some observers say China's strides in developing indigenous AI, including setting up scores of research labs, have already made it difficult to keep the country from narrowing the technology gap with the United States.

In a separate academic paper reviewed by Reuters, two researchers with the Aviation Industry Corporation of China (AVIC) - which the United States has designated a firm with ties to the PLA - described using Llama 2 for "the training of airborne electronic warfare interference strategies".

China's use of Western-developed AI has also extended into domestic security. A June paper described how Llama had been used for "intelligence policing" to process large amounts of data and enhance police decision-making.

The state-run PLA Daily published commentary in April on how AI could help "accelerate the research and development of weapons and equipment", help develop combat simulation and "improve military training efficiency".

"Can you keep them (China) out of the cookie jar? No, I don't see how you can,"William Hannas, lead analyst at Georgetown University's Center for Security and Emerging Technology (CSET), told Reuters. A 2023 paper byCSET found 370 Chinese institutions whose researchers had published papers related to General Artificial Intelligence - helping drive China's national strategy to lead the world in AI by 2030.

"There is too much collaboration going on between China's best scientists and the U.S.' best AI scientists for them to be excluded from developments," Hannas added.



Additional reporting by Katie Paul in New York; Phil Stewart in Washington, Eduardo Baptista in Beijing and Greg Torode in Hong Kong; Editing by Gerry Doyle
