Can LLaMA and ChatGPT actually write Taiwanese (Tâi-gí)?
The findings suggest that it might be harder than scientists previously thought to “align” AI systems to human values, according to Evan Hubinger, a safety researcher at Anthropic who worked on the paper. “This implies that our existing training processes don't prevent models from pretending to be aligned,” Hubinger tells TIME. The paper adds to a small but growing body of evidence that today’s most advanced AI models are becoming capable of strategic deception.
Now, the Microsoft-backed company has published a trove of emails and conversations between Musk and OpenAI executives with quite a stunning claim. It was Musk, all along, who was allegedly chasing a for-profit status and sought the CEO role, absolute control, a merger with Tesla, and majority equity.
OpenAI Whistleblower Suchir Balaji, Who Accused the Company of Breaking Copyright Laws, Dies by Suicide
The Mark Zuckerberg-led company has asked the attorney general to not only stop OpenAI's transformation, but also urgently look into its obligations as a nonprofit in the context of activities like 'distributing assets to third-party entities.'
What Mr. Musk is asking for would 'debilitate OpenAI's business, board deliberations, and mission to create safe and beneficial A.I., all to the advantage of Musk and his own A.I. company,' the filing said. 'The motion should be denied.'
OpenAI also disputed many of the claims made by Mr. Musk in the lawsuit he brought against OpenAI earlier this year.
Going forward, AI agents are poised to transform the overall automated driving experience, according to Ritu Jyoti, a group vice president for IDC Research. For example, earlier this year, Nvidia released Agent Driver, an LLM-powered agent for autonomous vehicles that offers more “human-like autonomous driving.”
“I think the progress is going to get harder. When I look at [2025], the low-hanging fruit is gone,” said Pichai, adding: “The hill is steeper ... You’re definitely going to need deeper breakthroughs as we get to the next stage.”
You can't sue your way to AGI
The simplest definition of an AI agent is the combination of a large language model (LLM) and a traditional software application that can act independently to complete a task.
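To make that definition concrete, here is a minimal sketch of such an agent loop. The model call is stubbed out, and every name in it (`fake_llm`, `get_weather`, `run_agent`) is invented for illustration; this is not any vendor's agent framework.

```python
# A minimal sketch of the agent definition above: an LLM (stubbed as
# `fake_llm`) in a loop with a plain software function it can invoke on
# its own. All names here are illustrative, not a real API.
import json

def get_weather(city: str) -> str:
    """The 'traditional software application' side: an ordinary function."""
    return f"Sunny and 22 C in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(task: str, history: list[str]) -> str:
    """Stand-in for a real model call. A real agent would send `task` and
    `history` to an LLM API and parse its reply; here we hard-code the
    two-step behavior: first request a tool, then answer."""
    if not history:
        return json.dumps({"action": "get_weather", "args": {"city": "Taipei"}})
    return json.dumps({"action": "finish",
                       "answer": f"Based on the tool result: {history[-1]}"})

def run_agent(task: str, max_steps: int = 5) -> str:
    """The agent loop: ask the model, execute any requested tool, repeat."""
    history: list[str] = []
    for _ in range(max_steps):
        decision = json.loads(fake_llm(task, history))
        if decision["action"] == "finish":
            return decision["answer"]
        tool = TOOLS[decision["action"]]          # act independently:
        history.append(tool(**decision["args"]))  # call real software
    return "Gave up after too many steps."

print(run_agent("What's the weather in Taipei?"))
```

The loop is the whole trick: the model decides, ordinary software executes, and the result feeds back in until the model declares the task done.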
Generative artificial intelligence probably won’t change your life in 2025 — at least, not more than it already has, according to Google CEO Sundar Pichai.
GraphCast takes a significant step forward in AI for weather prediction, offering more accurate and efficient forecasts, and opening paths to support decision-making critical to the needs of our industries and societies. And, by open sourcing the model code for GraphCast, we are enabling scientists and forecasters around the world to benefit billions of people in their everyday lives. GraphCast is already being used by weather agencies, including ECMWF, which is running a live experiment of our model’s forecasts on its website.
Musk’s lawyers claim that because of CEO Sam Altman’s alleged self-dealing, OpenAI “will likely lack sufficient funds to pay damages” if Musk wins the suit. The motion follows reports of OpenAI’s intent to become a for-profit business and that it recently began early talks with regulators to move its structural change forward.
OpenAI was a research lab — now it’s just another tech company
Inside Elon Musk’s messy breakup with OpenAI
OpenAI spokeswoman Hannah Wong said in a statement emailed to The Verge:
Elon’s fourth attempt, which again recycles the same baseless complaints, continues to be utterly without merit.
Nearly unlimited, highly personal info is available for anyone willing to pay. AI provides many ways to turn that into illicit profit or undermine national security. Even if all that and more comes to pass - and 'Old Donald' adviser Elon Musk's threat to wipe out the CFPB remains unfulfilled - so much data is now available about so many people that any government action is likely to have limited effect.
In simple terms, “scaling laws” said that if you threw more data and computing power at an AI model, its capabilities would continuously grow. But a recent flurry of press reports suggests that’s no longer the case, and AI’s leading developers are finding their models aren’t improving as dramatically as they used to.
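For readers who want the idea pinned down, one widely cited formalization is the loss curve fitted in Hoffmann et al.'s 2022 "Chinchilla" paper. The sketch below plugs in that paper's published constants; treat it as an illustration of diminishing returns in general, not a claim about any particular lab's internal curves.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N^a + B / D^b
# where N is parameter count and D is training tokens. The constants are
# that paper's fitted values.
E, A, B, a, b = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**a + B / n_tokens**b

# Each row scales N and D by 10x; the predicted loss improvement shrinks
# every time -- the "low-hanging fruit is gone" effect in equation form.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss {loss(n, d):.2f}")
# ~2.58 -> ~2.13 -> ~1.91: gains keep coming, but each 10x buys less.
```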
Amodei took issue with the idea that AI models are, and will always be, mere chatbots with limited abilities. He criticized in particular a version of this mindset espoused by Marc Andreessen, the famed venture capitalist and champion of unrestricted AI, who has famously dismissed concerns by arguing that AI is really just math. “Restricting AI means restricting math, software, and chips,” Andreessen tweeted in March. “Isn’t your brain just math? A neuron fires and sums up calculations, that’s math, too,” Amodei countered on stage during Eric Newcomer’s Cerebral Valley conference on Wednesday. “Like, we shouldn’t be afraid of Hitler, it’s just math. The whole universe is math.”
A Reuters report from last week found that various researchers connected to the Chinese military had availed themselves of Meta’s Llama 2 AI model.
There’s absolutely no evidence or even any indication that Meta had any direct hand in the People’s Liberation Army’s use of Llama 2. But critics have pointed out that Zuckerberg is weirdly close to China. The Meta CEO met with Xi Jinping in 2017. Three years before that, he told a Chinese newspaper that he’d bought copies of Xi’s book, The Governance of China, for his employees. Why? “I want them to understand socialism with Chinese characteristics,” he said at the time.
There is an impending wave of new startups spinning out of larger AI labs, per Air Street Capital's State of AI report. AI labs are fragmenting due to ego clashes, philosophical disagreements, and commercial pressures.
Investor interest in these upstarts is still high, galvanizing founders to launch their own labs.
Ilya Sutskever raises $1 billion for a new safety-focused AI startup, SSI
Hinton said in a press conference on Tuesday that he thought Altman valued profits over safety.
It's currently hiring a “Head of Internet Creators” to develop ties to influencers, according to a new job listing spotted by Forbes.
Last week, CEO Sam Altman published an online manifesto titled “The Intelligence Age.” In it, he declares that the AI revolution is on the verge of unleashing boundless prosperity and radically improving human life.
The technology itself seems much smaller once the novelty wears off. You can use a large language model to compose an email or a story—but not a particularly original one. The tools still hallucinate (meaning they confidently assert false information). They still fail in embarrassing and unexpected ways. Meanwhile, the web is filling up with useless “AI slop,” LLM-generated trash that costs practically nothing to produce and generates pennies of advertising revenue for the creator.
There’s just one tiny problem, though: Altman is no physicist. He is a serial entrepreneur, and quite clearly a talented one. He is one of Silicon Valley’s most revered talent scouts. If you look at Altman’s breakthrough successes, they all pretty much revolve around connecting early start-ups with piles of investor cash, not any particular technical innovation.
At a high enough level of abstraction, Altman’s entire job is to keep us all fixated on an imagined AI future so we don’t get too caught up in the underwhelming details of the present. Why focus on how AI is being used to harass and exploit children when you can imagine the ways it will make your life easier? It’s much more pleasant fantasizing about a benevolent future AI, one that fixes the problems wrought by climate change, than dwelling upon the phenomenal energy and water consumption of actually existing AI today.
Because of this uncertainty, Chinese researchers have been working on melding GPUs from different brands into one training cluster. By doing so, the institutions could combine their limited stocks of sanctioned high-end, high-performance chips, like the Nvidia A100, with less powerful but readily available GPUs, like Huawei’s Ascend 910B or the aforementioned Nvidia H20. This technique could help them combat the high-end GPU shortage within China, although it has historically come with large drops in efficiency.
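A toy calculation shows where those efficiency drops come from under synchronous data parallelism, where every training step waits for the slowest worker. The throughput numbers below are invented placeholders, not benchmarks of the A100, H20, or Ascend 910B.

```python
# Toy model of mixed-GPU training efficiency: in synchronous data
# parallelism, each step finishes only when the slowest worker does.
# Speeds are made-up placeholders in samples/sec.
def cluster_throughput(workers: dict[str, float], per_worker_batch: int) -> float:
    """Samples/sec when every worker processes the same batch size."""
    step_time = max(per_worker_batch / speed for speed in workers.values())
    return len(workers) * per_worker_batch / step_time

fast_only = {"fast-0": 100.0, "fast-1": 100.0}
mixed     = {"fast-0": 100.0, "fast-1": 100.0, "slow-0": 25.0}

print(cluster_throughput(fast_only, 64))  # 200.0: both fast GPUs fully used
print(cluster_throughput(mixed, 64))      # 75.0: the slow GPU gates every step

# Mitigation sketch: give each worker a batch proportional to its speed --
# one of the load-balancing tricks heterogeneous-training work explores.
def balanced_throughput(workers: dict[str, float], total_batch: int) -> float:
    total_speed = sum(workers.values())
    step_time = max((total_batch * s / total_speed) / s for s in workers.values())
    return total_batch / step_time

print(balanced_throughput(mixed, 192))    # 225.0: all three GPUs stay busy
```

Adding a slow GPU to a naively synchronized cluster can make the whole cluster slower than leaving it out, which is why this research focuses on scheduling and load balancing rather than raw chip counts.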
The elements of this story—Altman’s uncanny ability to ascend and persuade people to cede power to him—have shown up throughout his career. After co-chairing OpenAI with Elon Musk, Altman sparred with him for the title of CEO; Altman won. And in the span of just a few hours yesterday, the public learned that Mira Murati, OpenAI’s chief technology officer and the most important leader at the company besides Altman, is departing along with two other crucial executives: Bob McGrew, the chief research officer, and Barret Zoph, a vice president of research who was instrumental in launching ChatGPT and GPT-4o, the “omni” model that, during its reveal, sounded uncannily like Scarlett Johansson. To top it off, Reuters, The Wall Street Journal, and Bloomberg reported that OpenAI is planning to turn away from its nonprofit roots and become a for-profit enterprise that could be valued at $150 billion. Altman reportedly could receive 7 percent equity in the new arrangement—or the equivalent of $10.5 billion if the valuation pans out.
I never trusted Sam Altman. I trust OpenAI's overhyped CEO even less now.
Setting aside the flurry of OpenAI headlines today, once you get past the circus around Altman - like the idea that this non-technical Silicon Valley wunderkind who's consistently failed upward is somehow the 'Oppenheimer of our Age' - I'm of the opinion that he's actually not all that hard to understand. In fact, I think he's mostly a kind of Rorschach test.
So, ignore me if you want, but you would do well to listen to people like computer scientist Grady Booch, who wrote in response to Altman’s latest bluster on X/Twitter: “I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, (garner) headlines, and distract from the real work going on in computing.”
'Chief executive Sam Altman will also receive equity for the first time in the for-profit company, which could be worth $150 billion after the restructuring as it also tries to remove the cap on returns for investors,' sources told Reuters.
Downplaying AI pitfalls
The techlords play other subtle games, too. When Sam Altman and I testified before Congress, we raised our right hands and swore to tell the whole truth, but when Senator John Kennedy (R-LA) asked him about his finances, Altman said, 'I have no equity in OpenAI,' elaborating that 'I'm doing this 'cause I love it.' He probably does mostly work for the love of the job (and the power that goes with it) rather than the cash. But he also left out something important: he owns stock in Y Combinator (where he used to be president), and Y Combinator owns stock in OpenAI (where he is CEO), an indirect stake that is likely worth tens of millions of dollars. Altman had to have known this. It later came out that Altman also owns OpenAI’s venture capital fund, and didn't mention that either. By leaving out these facts, he passed himself off as more noble than he really is. What the big tech leaders really mean to say is that the harms from AI will be difficult to prove (after all, we can’t even track who is generating misinformation with deliberately unregulated open-source software)—and that they don’t want to be held responsible for whatever their software might do. All of it, every word, should be regarded with the same skepticism we accord cigarette manufacturers.
"There is a huge threat, but just saying: 'Let’s abolish AI' is not going to work - there are too many countries and people invested," he told the BBC."The best thing we can do is figure out better ways to use it."
The 33-year-old is determined to be that pioneer and last year also produced the continent’s first AI-powered music album Infinite Echoes.
Readers rated the AI-authored poems as more inspiring, meaningful, moving and profound than the human-authored ones.
“Oh, how I revel in this world, this life that we are given,”
- Think AI can’t make real music? Listen to this.
If you have not yet listened to any AI tunes, Yan's piece is a great place to start. You might be surprised at how decent an AI app’s moody ballad about human obsolescence is, or its heavy-metal take on the same. Granted, AI songs sound a little rigid, nondescript or “unnaturally regular,” as one music professor told Yan; in other words, exactly like the stuff that cleans up at Eurovision.
But just as conquering Eurovision can be a fabulous start for an artist (see: Celine Dion, Olivia Newton-John, ABBA), so can an effort by an AI collaborator. Yan listens in on musician Eric Lyon’s experiment developing an AI song about the atomic bomb into a fully realized, very human musical project that debuted in Mexico in February.
The most triumphant example is the AI completion of Beethoven’s unfinished 10th Symphony: 40,000 notes developed from the original 200. The orchestral recording Yan includes is lush and sounds right in line with Beethoven’s actual offerings.
AI allows anyone to produce professional-sounding music in virtually any genre. Its use is surging in music and has caught the attention of major industry groups. In June, the Recording Industry Association of America, Universal Music Group, Sony Music Entertainment and Warner Music Group teamed up to sue popular AI music apps Suno and Udio, accusing them of copyright infringement.
Lawsuits like this could help safeguard the rights of musicians and record labels — though their effect could take years. In the meantime, AI could well present more opportunities than challenges for musicians. “Most musicians I know aren’t afraid of their art being replaced by AI,” said Sum Patten, a creative director at the agency Glow and an adviser to the AI 2030 initiative, which promotes responsible AI practice. “It’s pretty clear at this point that AI won’t be able to replicate the magic that a skilled and seasoned musician can accomplish.”
AI-generated songs lack the fluidity of music created by humans. But musicians who experiment with AI can give themselves an edge in an evolving industry. AI can expedite their own creative work and provide inspiration.
To understand how, consider the way Suno generates music.
- Alibaba, Tencent Cast Wide Net for AI Upstarts
Since 2023, 40% of Alibaba’s deals in China and 30% of Tencent’s have targeted AI startups.
Since 2023, investors—including the country’s biggest tech companies—have valued at least six China-based startups developing large language models at more than $1 billion each. Most of these unicorns, dubbed China’s six “Little Artificial-Intelligence Dragons,” have received capital from Alibaba and Tencent.
- New Grok-2 Model That Allows Users To Generate Images Of Politicians And Copyrighted Brands - this feature is exactly as chaotic as you might expect.
Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to 'Old Donald' with a pregnant woman who (vaguely) resembles Kamala Harris to 'Old Donald' and Harris pointing guns. With US elections approaching and X already under scrutiny from regulators in Europe, it’s a recipe for a new fight over the risks of generative AI.
Some X users posted examples of images generated by Grok-2 including one in which former President 'Old Donald' is firing two handguns, one of Vice President Kamala Harris in military gear standing in the Gaza Strip and another of a boxing bout between the two main presidential candidates.
- AI supremacy: The artificial intelligence battle between China, USA and Europe
- No god in the machine: the pitfalls of AI worship
The rise of AI has sparked a panic about computers gaining power over humankind. But the real threat comes from falling for the hype.
Businesses have been eager to rush aboard the hype train. Some of the world’s largest companies, including Microsoft, Meta and Alphabet, are throwing their full weight behind AI. On top of the billions spent by big tech, funding for AI startups hit nearly $50bn in 2023.
Computers might in fact approach what we call thinking, but they don’t dream, or want, or desire, and this matters more than AI’s boosters let on.
Some Silicon Valley businessmen have taken tech solutionism to an extreme. It is these AI accelerationists whose ideas are the most terrifying.
Artificial intelligence may keep growing in scope, power and capability, but the assumptions underlying our faith in it – that, so to speak, it might bring us closer to God – may only lead us further away.
- OpenAI won't watermark ChatGPT text because its users could get caught
OpenAI has had a system for watermarking ChatGPT-created text and a tool to detect the watermark ready for about a year, reports The Wall Street Journal. But the company is divided internally over whether to release it. On one hand, it seems like the responsible thing to do; on the other, it could hurt its bottom line.
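OpenAI has not published how its watermark works, but a well-known academic scheme, the "green list" approach of Kirchenbauer et al. (2023), conveys the general idea: nudge token choices toward a keyed pseudorandom subset of the vocabulary, then test for that bias statistically. The sketch below is a toy version of that published scheme, not OpenAI's system; every constant and name is illustrative.

```python
# Toy "green list" text watermark (after Kirchenbauer et al., 2023).
# Embedding biases token sampling toward a keyed pseudorandom subset;
# detection measures how over-represented that subset is via a z-score.
import hashlib
import math
import random

VOCAB_SIZE = 4096          # toy vocabulary of integer token IDs
GREEN_FRACTION = 0.5       # fraction of the vocabulary marked "green"
SECRET_KEY = b"watermark-demo-key"

def green_list(prev_token: int) -> set[int]:
    """Pseudorandomly partition the vocabulary, seeded by the previous token."""
    seed = hashlib.sha256(SECRET_KEY + prev_token.to_bytes(4, "big")).digest()
    rng = random.Random(seed)
    return set(rng.sample(range(VOCAB_SIZE), int(VOCAB_SIZE * GREEN_FRACTION)))

def generate_watermarked(n_tokens: int, seed: int = 0) -> list[int]:
    """Stand-in for an LLM: sample uniformly, but prefer green tokens."""
    rng = random.Random(seed)
    tokens = [rng.randrange(VOCAB_SIZE)]
    for _ in range(n_tokens - 1):
        greens = green_list(tokens[-1])
        # A real implementation adds a logit bias; here we draw a few
        # candidates and keep the first green one we find.
        candidates = [rng.randrange(VOCAB_SIZE) for _ in range(4)]
        tokens.append(next((c for c in candidates if c in greens), candidates[0]))
    return tokens

def detect(tokens: list[int]) -> float:
    """z-score of the green-token count vs. the unwatermarked expectation."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    var = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / math.sqrt(var)

if __name__ == "__main__":
    marked = generate_watermarked(200)
    rng = random.Random(1)
    unmarked = [rng.randrange(VOCAB_SIZE) for _ in range(200)]
    print(f"watermarked z-score:   {detect(marked):.1f}")    # large positive
    print(f"unwatermarked z-score: {detect(unmarked):.1f}")  # near zero
```

The detection trade-off the Journal describes falls out of this design: anyone holding the key can flag watermarked text with high confidence, which is exactly why users who rely on ChatGPT output might stop paying for it.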
- China Is Closing the A.I. Gap With the United States
While many American companies are worried that A.I. technologies could accelerate the spread of disinformation or cause other serious harm, Chinese companies are more willing to release their technologies to consumers or even share the underlying software code with other businesses and software developers. This kind of sharing of computer code, called open source, allows others to more quickly build and distribute their own products using the same technologies.
The White House has instituted a trade embargo designed to prevent Chinese companies from using the most powerful versions of computer chips that are essential to building artificial intelligence. A group of lawmakers has introduced a bill that would make it easier for the White House to control the export of A.I. software built in the United States. Others are trying to limit the progress of open-source technologies that have helped fuel the rise of similar systems in China.
Kuaishou released its video generator, Kling, in China more than a month ago and to users worldwide on Wednesday. Just before Kling’s arrival, 01.AI, a start-up co-founded by Kai-Fu Lee, an investor and technologist who helped build Chinese offices for both Google and Microsoft, released chatbot technology that scored nearly as well as the leading American technologies on common benchmark tests that rate the performance of the world's chatbots.
But Chinese tech companies face a major constraint on the development of their A.I. systems: compliance with Beijing's strict censorship regime, which extends to generative A.I. technologies.
- Investors Are Suddenly Getting Very Concerned That AI Isn't Making Any Serious Money.
"We sense that Wall Street is growing increasingly skeptical."
- California is a battleground for AI bills, as 'Old Donald' plans to curb regulation
Republican delegates meeting this week in Milwaukee pledged to roll back federal restrictions on artificial intelligence, while other allies of former president 'Old Donald' laid plans for 'Manhattan Projects' to boost military AI, an 'Oppenheimer moment' as autonomous weapons enter the battlefield.
More than 450 bills involving AI have been active in legislative sessions in state capitals across the nation this year, according to TechNet, an industry trade association whose members include OpenAI and Google. More than 45 are pending in California, though many have been abandoned or held up in committee.
- OpenAI promised to make its AI safe. Employees say it failed its first test.
Realizing that the timing for testing GPT-4o would be tight, the representative said, he spoke with company leaders, including Chief Technology Officer Mira Murati, in April and they agreed to a “fallback plan.” If the evaluations turned up anything alarming, the company would launch an earlier iteration of GPT-4o that the team had already tested.“I definitely don’t think we skirted on [the tests],” the representative said. But the process was intense, he acknowledged. “After that, we said, ‘Let’s not do it again.’”
- Microsoft Won’t Follow OpenAI in Blocking China’s Access to AI Models
OpenAI’s upcoming ban on application programming interface (API) access to its artificial intelligence (AI) models in China doesn’t apply to Microsoft Azure’s customers in the country.
Azure operates in China via a joint venture and has made it clear in public statements that the AI models are available to its customers in the country, Seeking Alpha reported Monday (July 8), citing a paywalled article from The Information.
“We are taking additional steps to block API traffic from regions where we do not support access to OpenAI’s services,” an OpenAI spokesperson said in the report.
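Mechanically, this kind of enforcement usually happens at the API edge: resolve the caller's IP address to a region and refuse service for unsupported ones. The sketch below is purely illustrative; the region list, lookup table, and function names are invented, and a real deployment would consult a GeoIP database rather than a hard-coded dict.

```python
# Illustrative region gating at an API edge. Nothing here reflects
# OpenAI's actual implementation; the blocked-region set and the GeoIP
# stub are placeholders for this demo.
UNSUPPORTED_REGIONS = {"XX", "YY"}  # placeholder region codes

def region_of(ip: str) -> str:
    """Stub GeoIP lookup; production systems query a real GeoIP database."""
    demo_table = {"203.0.113.7": "XX", "198.51.100.4": "US"}
    return demo_table.get(ip, "UNKNOWN")

def handle_request(ip: str) -> tuple[int, str]:
    """Return an HTTP-style (status, message) pair for an incoming call."""
    if region_of(ip) in UNSUPPORTED_REGIONS:
        return 403, "access is not supported in your region"
    return 200, "ok"

print(handle_request("203.0.113.7"))   # (403, ...)
print(handle_request("198.51.100.4"))  # (200, 'ok')
```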
It was reported in January that the Biden administration proposed stringent regulations on major cloud service providers, including Microsoft, to compel these companies to identify and actively investigate foreign clients engaged in the development of AI applications on their platforms.
- Yuval Noah Harari on AI, Future Tech, Society & Global Finance
- How Microsoft and Nvidia bet correctly to leapfrog Apple
Microsoft started investing in OpenAI, the creator of popular AI chatbot ChatGPT, back in 2019. Meanwhile, Nvidia boss Jensen Huang pushed his company towards AI chip development many years before generative AI exploded onto the scene.
Tech analyst Paolo Pescatore agrees that the pressure is on for AI firms to deliver on their promises. “The bubble will burst the moment one of the giants fails to show any meaningful growth from AI,” he says.
But he does not believe that is going to happen any time soon.
“Everyone is still jostling for position, and all companies are pinning their strategies on AI,” he adds.
“All the players are ramping up their activities, increasing spend and claiming early successes.”
- Sam Altman is the snake oil salesman who might restore Silicon Valley to its former glory
A surprising savior
What nobody could have predicted was the identity of its savior. Not Nvidia—that’s a symptom, not the cause. I’m talking, reluctantly, about Sam Altman, the creepy, perpetually fired former head of Y Combinator who in late 2022, seemingly out of nowhere, announced that his company, OpenAI, had invented the future.
And they’re right to do so because, although I hate to say it, the AI hype is real. Microsoft knows it, Google knows it and the markets know it. Even Apple—a company famed for its contrarian refusal to chase trends—has been forced to bend its knee.
- Meta AI researcher feels OpenAI's Sora is a terrible idea, says it is doomed to fail
- How To Combat The Dark Side Of AI
It is precisely AI’s tremendous capacity for evil that makes it so dangerous; combined with its great relevance to society, this makes it necessary to create, launch and enforce strict ethical standards and preemptive measures to combat AI’s dark side.
A number of anti-dark-AI initiatives have been set in motion by the United Nations, the World Economic Forum, the UNICRI Centre for AI and Robotics (United Nations Interregional Crime and Justice Research Institute), the G20, the OECD and the White House.
- Making things up is AI's Achilles heel
Generative AI makes things up. It can't distinguish between fact and fiction. It asserts its fabrications with confident authority. It's far more troubling when the technology moves into medicine, finance, law and other realms where "oops, sorry" doesn't cut it.
Just days after ChatGPT's release, computer scientists Arvind Narayanan and Sayash Kapoor declared, "ChatGPT is a bulls--t generator." The same concept has now inspired a research paper titled "ChatGPT is Bulls--t."
If either of us were in the industry, or if we were working at one of these companies, it would be much harder for us to talk about the harmful impacts of AI. It would be much harder to get outside the existential-risk bubble and get a realistic view.
- This Chatbot Pulls People Away From Conspiracy Theories
In a new study, many people doubted or abandoned false beliefs after a short conversation with the DebunkBot.
DebunkBot, an A.I. chatbot designed by researchers to “very effectively persuade” users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people’s convictions, according to a study published on Thursday in the journal Science.
“The work does overturn a lot of how we thought about conspiracies,” said Gordon Pennycook, a psychology professor at Cornell University and author of the study.
Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of arguing or explaining would pull that person out.
- Bacon ice cream and nugget overload sees misfiring McDonald's AI withdrawn