LLaMA and ChatGPT 敢 會曉 寫 台語文 ?
Because of this uncertainty, Chinese researchers have been working on melding GPUs from different brands into one training cluster. By doing so, the institutions could combine their limited stocks of sanctioned high-end, high-performance chips, like the Nvidia A100, with less powerful but readily available GPUs, like Huawei’s Ascend 910B or the aforementioned Nvidia H20. This technique could help them combat the high-end GPU shortage within China, although it has historically come with large drops in efficiency. Sponsored Links Amazon's Worst Nightmare: Thousands Canceling Prime for This Clever Hack (It's Unbelievable)Online Shopping Tools
The elements of this story—Altman’s uncanny ability to ascend and persuade people to cede power to him—have shown up throughout his career. After co-chairing OpenAI with Elon Musk, Altman sparred with him for the title of CEO; Altman won. And in the span of just a few hours yesterday, the public learned that Mira Murati, OpenAI’s chief technology officer and the most important leader at the company besides Altman, is departing along with two other crucial executives: Bob McGrew, the chief research officer, and Barret Zoph, a vice president of research who was instrumental in launching ChatGPT and GPT-4o, the “omni” model that, during its reveal, sounded uncannily like Scarlett Johansson. To top it off, Reuters, The Wall Street Journal, and Bloomberg reported that OpenAI is planning to turn away from its nonprofit roots and become a for-profit enterprise that could be valued at $150 billion. Altman reportedly could receive 7 percent equity in the new arrangement—or the equivalent of $10.5 billion if the valuation pans out.I never trusted Sam Altman. I trust OpenAI's overhyped CEO even less now.
Setting aside the flurry of OpenAI headlines today, once you get past the circus around Altman ' like the idea that this non-technical Silicon Valley wunderkind who's consistently failed upward is somehow the 'Oppenheimer of our Age' - I'm of the opinion that he's actually not all that hard to understand. In fact, I think he's mostly a kind of Rorschach test.
So, ignore me if you want, but you would do well to listen to people like computer scientist Grady Booch, who wrote in response to Altman’s latest bluster on X/Twitter: “I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, (garner) headlines, and distract from the real work going on in computing.”
'Chief executive Sam Altman will also receive equity for the first time in the for-profit company, which could be worth $150 billion after the restructuring as it also tries to remove the cap on returns for investors,' sources told Reuters.
Downplaying AI pitfallsThe techlords play other subtle games, too. When Sam Altman and I testified before Congress, we raised our right hands and swore to tell the whole truth, but when Senator John Kennedy (R-LA) asked him about his finances, Altman said, 'I have no equity in OpenAI,' elaborating that 'I'm doing this 'cause I love it.' He probably does mostly work for the love of the job (and the power that goes with it) rather than the cash. But he also left out something important: he owns stock in Y Combinator (where he used to be president), and Y Combinator owns stock in OpenAI (where he is CEO), an indirect stake that is likely worth tens of millions of dollars. Altman had to have known this. It later came out that Altman also owns OpenAI’s venture capital fund, and didn't mention that either. By leaving out these facts, he passed himself off as more noble than he really is. What the big tech leaders really mean to say is that the harms from AI will be difficult to prove (after all, we can’t even track who is generating misinformation with deliberately unregulated open-source software)—and that they don’t want to be held responsible for whatever their software might do. All of it, every word, should be regarded with the same skepticism we accord cigarette manufacturers.
If you have not yet listened to any AI tunes, Yan's piece is a great place to start. You might be surprised at how decent an AI app’s moody ballad about human obsolescence is, or its heavy-metal take on the same.Granted, AI songs sound a little rigid, nondescript or “unnaturally regular,” as one music professor told Yan; in other words, exactly like the stuff that cleans up at Eurovision.
But just as conquering Eurovision can be a fabulous start for an artist (see: Celine Dion, Olivia Newton-John, ABBA), so can an effort by an AI collaborator. Yan listens in on musician Eric Lyon’s experiment developing an AI song about the atomic bomb into a fully realized, very human musical project that debuted in Mexico in February.
The most triumphant example is the AI completion of Beethoven’s unfinished 10th Symphony: 40,000 notes developed from the original 200. The orchestral recording Yan includes is lush and sounds right in line with Beethoven’s actual offerings.
AI allows anyone to produce professional-sounding music in virtually any genre. Its use is surging in music and has caught the attention of major industry groups. In June, the Recording Industry Association of America, Universal Music Group, Sony Music Entertainment and Warner Music Group teamed up to sue popular AI music apps Suno and Udio, accusing them of copyright infringement.
Lawsuits like this could help safeguard the rights of musicians and record labels — though their effect could take years. In the meantime, AI could well present more opportunities than challenges for musicians. “Most musicians I know aren’t afraid of their art being replaced by AI,” said Sum Patten, a creative director at the agency Glow and an adviser to the AI 2030 initiative, which promotes responsible AI practice. “It’s pretty clear at this point that AI won’t be able to replicate the magic that a skilled and seasoned musician can accomplish.”
AI-generated songs lack the fluidity of music created by humans. But musicians who experiment with AI can give themselves an edge in an evolving industry. AI can expedite their own creative work and provide inspiration.
To understand how, consider the way Suno generates music.
Since 2023, 40% of Alibaba’s deals in China and 30% of Tencent’s have targeted AI startupsSince 2023, investors—including the country’s biggest tech companies—have valued at least six China-based startups developing large language models at more than $1 billion each. Most of these unicorns, dubbed China’s six “Little Artificial-Intelligence Dragons,” have received capital from Alibaba and Tencent.
Subscribers to X Premium, which grants access to Grok, have been posting everything from Barack Obama doing cocaine to 'Old Donald' with a pregnant woman who (vaguely) resembles Kamala Harris to 'Old Donald' and Harris pointing guns. With US elections approaching and X already under scrutiny from regulators in Europe, it’s a recipe for a new fight over the risks of generative AI.
Some X users posted examples of images generated by Grok-2 including one in which former President 'Old Donald' is firing two handguns, one of Vice President Kamala Harris in military gear standing in the Gaza Strip and another of a boxing bout between the two main presidential candidates.
The rise of AI has sparked a panic about computers gaining power over humankind. But the real threat comes from falling for the hype.Businesses have been eager to rush aboard the hype train. Some of the world’s largest companies, including Microsoft, Meta and Alphabet, are throwing their full weight behind AI. On top of the billions spent by big tech, funding for AI startups hit nearly $50bn in 2023.
Computers might in fact approach what we call thinking, but they don’t dream, or want, or desire, and this matters more than AI’s boosters let on.
Some Silicon Valley businessmen have taken tech solutionism to an extreme. It is these AI accelerationists whose ideas are the most terrifying.
Artificial intelligence may keep growing in scope, power and capability, but the assumptions underlying our faith in it – that, so to speak, it might bring us closer to God – may only lead us further away.
OpenAI has had a system for watermarking ChatGPT-created text and a tool to detect the watermark ready for about a year, reports The Wall Street Journal. But the company is divided internally over whether to release it. On one hand, it seems like the responsible thing to do; on the other, it could hurt its bottom line.
While many American companies are worried that A.I. technologies could accelerate the spread of disinformation or cause other serious harm, Chinese companies are more willing to release their technologies to consumers or even share the underlying software code with other businesses and software developers. This kind of sharing of computer code, called open source, allows others to more quickly build and distribute their own products using the same technologies.The White House has instituted a trade embargo designed to prevent Chinese companies from using the most powerful versions of computer chips that are essential to building artificial intelligence. A group of lawmakers has introduced a bill that would make it easier for the White House to control the export of A.I. software built in the United States. Others are trying to limit the progress of open-source technologies that have helped fuel the rise of similar systems in China.
Kuaishou released its video generator, Kling, in China more than a month ago and to users worldwide on Wednesday. Just before Kling’s arrival, 01.AI, a start-up co-founded by Kai-Fu Lee, an investor and technologist who helped build Chinese offices for both Google and Microsoft, released chatbot technology that scored nearly as well as the leading American technologies on common benchmark tests that rate the performance of the world's chatbots.
But Chinese tech companies face a major constraint on the development of their A.I. systems: compliance with Beijing's strict censorship regime, which extends to generative A.I. technologies.
"We sense that Wall Street is growing increasingly skeptical."
Republican delegates meeting this week in Milwaukee pledged to roll back federal restrictions on artificial intelligence, while other allies of former president 'Old Donald' laid plans for 'Manhattan Projects' to boost AI's 'Oppenheimer moment' autonomous weapons enter the battlefield military AI.
More than 450 bills involving AI have been active in legislative sessions in state capitals across the nation this year, according to TechNet, an industry trade association whose members include OpenAI and Google. More than 45 are pending in California, though many have been abandoned or held up in committee.
Realizing that the timing for testing GPT-4o would be tight, the representative said, he spoke with company leaders, including Chief Technology Officer Mira Murati, in April and they agreed to a “fallback plan.” If the evaluations turned up anything alarming, the company would launch an earlier iteration of GPT-4o that the team had already tested.“I definitely don’t think we skirted on [the tests],” the representative said. But the process was intense, he acknowledged. “After that, we said, ‘Let’s not do it again.’”
OpenAI’s upcoming ban on application programming interface (API) access to its artificial intelligence (AI) models in China doesn’t apply to Microsoft Azure’s customers in the country.Azure operates in China via a joint venture and has made it clear in public statements that the AI models are available to its customers in the country, Seeking Alpha reported Monday (July 8), citing a paywalled article from The Information.
“We are taking additional steps to block API traffic from regions where we do not support access to OpenAI’s services,” an OpenAI spokesperson said in the report.
It was reported in January that the Biden administration proposed stringent regulations on major cloud service providers, including Microsoft, to compel these companies to identify and actively investigate foreign clients engaged in the development of AI applications on their platforms.
Microsoft started investing in OpenAI, the creator of popular AI chatbot ChatGPT, back in 2019. Meanwhile, Nvidia boss Jensen Huang pushed his company towards AI chip development many years before generative AI exploded onto the scene.Tech analyst Paolo Pescatore agrees that the pressure is on for AI firms to deliver on their promises. “The bubble will burst the moment one of the giants fails to show any meaningful growth from AI,” he says.
But he does not believe that is going to happen any time soon.
“Everyone is still jostling for position, and all companies are pinning their strategies on AI,” he adds.
“All the players are ramping up their activities, increasing spend and claiming early successes.”
A surprising saviorWhat nobody could have predicted was the identity of its savior. Not Nvidia—that’s a symptom, not the cause. I’m talking, reluctantly, about Sam Altman, the creepy, perpetually fired former head of YCombinator who in late 2022, seemingly out of nowhere, announced that his company, OpenAI, had invented the future.
And they’re right to do so because, although I hate to say it, the AI hype is real. Microsoft knows it, Google knows it and the markets know it. Even Apple—a company famed for its contrarian refusal to chase trends—has been forced to bend its knee.
Meta AI researcher feels OpenAI's Sora is a terrible idea, says it is doomed to fail
It is precisely because of AI’s tremendous capacity for evil that makes it so dangerous, and combined with its great relevance to society, it is necessary to create, launch and enforce strict ethical standards and preemptive measures as a means to combat artificial intelligence.A number of anti-dark AI initiatives have been set in motion by the United Nations, World Economic Forum, the UNICRI Centre for AI and Robotics (United Nations Interregional Crime and Justice Research Institute), the G20 and OECD and the White House.
Generative AI makes things up. It can't distinguish between fact and fiction. It asserts its fabrications with confident authority.It's far more troubling when the technology moves into medicine, finance, law and other realms where "oops, sorry" doesn't cut it.
Just days after ChatGPT's release, computer scientists Arvind Narayanan and Sayash Kapoor declared, "ChatGPT is a bulls--t generator." The same concept has now inspired a research paper titled "ChatGPT is Bulls--t."
If either of us were in the industry, or if we were working at one of these companies, it would be much harder for us to talk about the harmful impacts of AI. It would be much harder to get outside the existential-risk bubble and get a realistic view.
- This Chatbot Pulls People Away From Conspiracy Theories
In a new study, many people doubted or abandoned false beliefs after a short conversation with the DebunkBot.DebunkBot, an A.I. chatbot designed by researchers to “very effectively persuade” users to stop believing unfounded conspiracy theories, made significant and long-lasting progress at changing people’s convictions, according to a study published on Thursday in the journal Science.
“The work does overturn a lot of how we thought about conspiracies,” said Gordon Pennycook, a psychology professor at Cornell University and author of the study.
Until now, conventional wisdom held that once someone fell down the conspiratorial rabbit hole, no amount of arguing or explaining would pull that person out.
- Bacon ice cream and nugget overload sees misfiring McDonald's AI withdrawn
- OpenAi adds formal 'Old Donald' appointed former NSA director to its board
The appointment of the career Army officer, who was the longest-serving leader of U.S. Cybercom, comes as OpenAI tries to quell criticism of its security practices — including from some of the company’s current and former employees who allege the ChatGPT-maker prioritizes profits over the safety of its products. The company is under increasing scrutiny following the exodus of several key employees and a public letter that called for sweeping changes to its practices.Security researchers have also pointed out that chatbots are vulnerable to “prompt injection” attacks, in which hackers can break in to a company’s computer system through a chatbot that is hooked up to its internal databases. Some companies also ban their employees from using ChatGPT out of concern that OpenAI may not be able to properly protect sensitive information fed into its chatbot.