Cryptocurrency’s Role in X
Will cryptocurrencies be included? Not likely, at least not while President Biden maintains strong regulatory pressure on crypto. If Trump’s administration eases those burdens, however, the picture could change. Imagine users buying popular meme coins directly within the app, a feature Musk could readily implement.
Given the potentially enormous profits from transaction fees, Musk has a clear incentive to pursue this avenue. With cryptocurrency ETFs already trading on U.S. stock markets, the launch of payment services on X could be just around the corner.
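To put rough numbers on that fee opportunity, here is a back-of-the-envelope sketch in Python. Every input (user count, adoption rate, trade size, fee rate) is a hypothetical assumption for illustration, not a reported figure.

```python
# Hypothetical estimate of in-app crypto trading fee revenue.
# All inputs are assumptions for illustration, not reported numbers.

monthly_active_users = 500_000_000  # assumed X user base
trading_adoption = 0.02             # assume 2% of users trade in-app
trades_per_month = 4                # assumed trades per active trader
avg_trade_size_usd = 50.0           # assumed average trade size
fee_rate = 0.01                     # assume a 1% fee per trade

traders = monthly_active_users * trading_adoption
monthly_volume = traders * trades_per_month * avg_trade_size_usd
annual_fee_revenue = monthly_volume * fee_rate * 12

print(f"Annual fee revenue under these assumptions: ${annual_fee_revenue:,.0f}")
# -> $240,000,000: even modest adoption would yield material revenue.
```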
Meanwhile, despite their frantic hand-wringing in 2023, Musk and other technologists did not slow down to focus on safety in 2024. Quite the opposite: AI investment in 2024 outpaced anything we’d seen before. Altman quickly returned to the helm of OpenAI, and a wave of safety researchers left the outfit during the year, ringing alarm bells about its dwindling safety culture.
Biden’s safety-focused AI executive order has largely fallen out of favor this year in Washington, D.C. President-elect Donald Trump announced plans to repeal Biden’s order, arguing it hinders AI innovation. Marc Andreessen says he’s been advising Trump on AI and technology in recent months, and Sriram Krishnan, a longtime venture capitalist at a16z, is now Trump’s official senior adviser on AI.
Republicans in Washington have several AI-related priorities that outrank AI doom today, according to Dean Ball, an AI-focused research fellow at George Mason University’s Mercatus Center. Those include building out data centers to power AI, using AI in the government and military, competing with China, limiting content moderation from center-left tech companies, and protecting children from AI chatbots.
“I think [the movement to prevent catastrophic AI risk] has lost ground at the federal level. At the state and local level they have also lost the one major fight they had,” said Ball in an interview with TechCrunch. Of course, he’s referring to California’s controversial AI safety bill SB 1047.
Part of the reason AI doom fell out of favor in 2024 was simply because, as AI models became more popular, we also saw how unintelligent they can be. It’s hard to imagine Google Gemini becoming Skynet when it just told you to put glue on your pizza.
But at the same time, 2024 was a year when many AI products seemed to bring concepts from science fiction to life. For the first time this year, OpenAI showed how we could talk with our phones rather than through them, and Meta unveiled smart glasses with real-time visual understanding. The ideas underlying catastrophic AI risk largely stem from sci-fi films, and while there’s obviously a limit, the AI era is proving that some ideas from sci-fi may not stay fictional forever.
The AI safety battle of 2024 came to a head with SB 1047, a bill supported by two highly regarded AI researchers: Geoffrey Hinton and Yoshua Bengio. The bill tried to prevent advanced AI systems from causing mass human extinction events and cyberattacks that could cause more damage than 2024’s CrowdStrike outage.
SB 1047 passed through California’s Legislature, making it all the way to Governor Gavin Newsom’s desk, where he called it a bill with “outsized impact.” The bill tried to prevent the kinds of things Musk, Altman, and many other Silicon Valley leaders warned about in 2023 when they signed those open letters on AI.
But Newsom vetoed SB 1047. In the days before his decision, he talked about AI regulation onstage in downtown San Francisco, saying: “I can’t solve for everything. What can we solve for?”
That pretty clearly sums up how many policymakers are thinking about catastrophic AI risk today. It’s just not a problem with a practical solution.
Even so, SB 1047 was flawed beyond its focus on catastrophic AI risk. The bill regulated AI models based on size, in an attempt to only regulate the largest players. However, that didn’t account for new techniques such as test-time compute or the rise of small AI models, which leading AI labs are already pivoting to. Furthermore, the bill was widely considered an assault on open source AI — and by proxy, the research world — because it would have limited firms like Meta and Mistral from releasing highly customizable frontier AI models.
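To see why a purely size-based trigger misses these cases, here is a minimal sketch, assuming SB 1047-style cutoffs. The bill keyed coverage to training compute and training cost; the exact thresholds and how they combine are approximated here, and the model figures are invented.

```python
# Sketch of a size-based coverage test, loosely modeled on SB 1047's
# training-compute and training-cost triggers. Thresholds approximate.

TRAIN_FLOPS_CUTOFF = 1e26        # approx. training-compute threshold
TRAIN_COST_CUTOFF = 100_000_000  # approx. $100M training-cost threshold

def is_covered(train_flops: float, train_cost_usd: float) -> bool:
    """Only training-time resources count toward coverage."""
    return train_flops >= TRAIN_FLOPS_CUTOFF and train_cost_usd >= TRAIN_COST_CUTOFF

# A small model that leans on heavy test-time compute (e.g., sampling
# many reasoning chains per query) never trips a training-time trigger,
# no matter how capable inference-time search makes it:
print(is_covered(train_flops=5e24, train_cost_usd=20_000_000))  # False
```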
But according to the bill’s author, state Senator Scott Wiener, Silicon Valley played dirty to sway public opinion about SB 1047. He previously told TechCrunch that venture capitalists from Y Combinator and a16z engaged in a propaganda campaign against the bill.
Specifically, these groups spread a claim that SB 1047 would send software developers to jail for perjury. Y Combinator asked young founders to sign a letter saying as much in June 2024. Around the same time, Andreessen Horowitz general partner Anjney Midha made a similar claim on a podcast.
The Brookings Institution labeled this one of many misrepresentations of the bill. SB 1047 did say that tech executives would need to submit reports identifying shortcomings of their AI models, and it noted that lying on a government document is perjury. But the venture capitalists who spread these fears failed to mention that people are rarely charged with perjury, and even more rarely convicted.
YC rejected the idea that they spread misinformation, previously telling TechCrunch that SB 1047 was vague and not as concrete as Senator Wiener made it out to be.
More generally, there was a growing sentiment during the SB 1047 fight that AI doomers were not just anti-technology, but also delusional. Famed investor Vinod Khosla called Wiener clueless about the real dangers of AI at TechCrunch’s 2024 Disrupt event.
Meta’s chief AI scientist, Yann LeCun, has long opposed the ideas underlying AI doom, but became more outspoken this year.
“The idea that somehow [intelligent] systems will come up with their own goals and take over humanity is just preposterous, it’s ridiculous,” said LeCun at Davos in 2024, noting how we’re very far from developing superintelligent AI systems. “There are lots and lots of ways to build [any technology] in ways that will be dangerous, wrong, kill people, etc… But as long as there is one way to do it right, that’s all we need.”
The fight ahead in 2025
The policymakers behind SB 1047 have hinted they may come back in 2025 with a modified bill to address long-term AI risks. One of the sponsors behind the bill, Encode, says the national attention SB 1047 drew was a positive signal.
“The AI safety movement made very encouraging progress in 2024, despite the veto of SB 1047,” said Sunny Gandhi, Encode’s vice president of Political Affairs, in an email to TechCrunch. “We are optimistic that the public’s awareness of long-term AI risks is growing and there is increasing willingness among policymakers to tackle these complex challenges.”
Gandhi says Encode expects “significant efforts” in 2025 to regulate AI-assisted catastrophic risk, though he did not disclose specifics.
On the opposite side, a16z general partner Martin Casado is one of the people leading the fight against regulating catastrophic AI risk. In a December op-ed on AI policy, Casado argued that we need more reasonable AI policy moving forward, declaring that “AI appears to be tremendously safe.”
“The first wave of dumb AI policy efforts is largely behind us,” said Casado in a December tweet. “Hopefully we can be smarter going forward.”
Calling AI “tremendously safe” and attempts to regulate it “dumb” is something of an oversimplification. For example, Character.AI — a startup a16z has invested in — is currently being sued and investigated over child safety concerns. In one active lawsuit, a 14-year-old Florida boy killed himself after allegedly confiding his suicidal thoughts to a Character.AI chatbot that he had romantic and sexual chats with. This case shows how our society has to prepare for new types of risks around AI that may have sounded ridiculous just a few years ago.
CAICT does not break down figures for individual brands; however, Apple accounts for the majority of foreign mobile phone shipments in China, with competitors like Samsung making up only a tiny share of the market.
The figures highlight the mounting pressure Apple is under in the world’s largest smartphone market as it battles rising competition from domestic brands.
Huawei, for instance, whose handset business was crippled by U.S. sanctions, saw a resurgence in late 2023 and has aggressively launched high-end smartphones in China that have proved popular with local buyers.
Huawei’s growth far outstripped Apple in the third quarter of last year, according to IDC’s latest data.
Apple is hoping its iPhone 16 series, which was released in September, will help the company regain momentum in China, with the Cupertino tech giant promising a host of new AI features via its Apple Intelligence software.
However, Apple Intelligence is not yet available in China due to complex regulations around AI in the country.
In the meantime, some of Apple’s domestic rivals have been touting their own AI features that are available on devices now.
In a show of how critical China is for the iPhone giant, CEO Tim Cook visited the country multiple times last year in an effort to shore up partnerships for Apple Intelligence with local Chinese firms.
In a bid to spur interest in the iPhone 16, Apple will begin discounts for the device on Saturday as part of a Chinese New Year holiday promotion.
"I will look forward to spending a few months handing over the reins — and to representing the company at a number of international gatherings in Q1 of this year," Clegg wrote in a memo to his staff that he shared on Facebook on Thursday.
Clegg joined the company in 2018 after a career in British politics with the Liberal Democrats party, and he helped Meta navigate intense scrutiny, especially over the company's influence on elections and its efforts to curb harmful content.
Clegg also helped steer the company through the fallout of the Cambridge Analytica scandal, in which the data of millions of Facebook users was improperly obtained by a third-party political consultancy. He also represented the company in Washington and London, frequently on artificial intelligence panels and at congressional hearings.
"My time at the company coincided with a significant resetting of the relationship between 'big tech' and the societal pressures manifested in new laws, institutions and norms affecting the sector," Clegg wrote.
In his note, Clegg said Kevin Martin, a former Federal Communications Commission chairman, would replace Joel Kaplan, Clegg's successor, as Meta's vice president of global policy. Kaplan, he added, would work closely with David Ginsburg, the company's vice president of global communications and public affairs.
"Nick: I'm grateful for everything you've done for Meta and the world these past seven years," Meta CEO Mark Zuckerberg said in a statement. "You've ... built a strong team to carry this work forward. I'm excited for Joel to step into this role next given his deep experience and insight leading our policy work for many years."
Crypto assets slid into the end of 2024. Although the postelection rally that sent bitcoin to new records above $100,000 had fizzled, the flagship cryptocurrency still ended the year up more than 120%. Long-term holders took some profits while others sold amid renewed uncertainty about the direction of Federal Reserve interest rate cuts in 2025.
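As a quick sanity check on that “up more than 120%” figure, the arithmetic works out from approximate year-start and year-end prices (both rounded here for illustration):

```python
# Rough check of bitcoin's 2024 return; prices are approximate.
start_price = 42_000  # approx. BTC/USD, early January 2024
end_price = 93_000    # approx. BTC/USD, late December 2024

pct_return = (end_price / start_price - 1) * 100
print(f"Approximate 2024 return: {pct_return:.0f}%")  # ~121%
```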
The stringent requirements prompted pushback from major industry players. Leading brands like NordVPN, ExpressVPN, Surfshark and ProtonVPN voiced significant reservations about the rules, with several announcing plans to withdraw their server infrastructure from India.
NordVPN, ExpressVPN and Surfshark continue to maintain services for Indian customers, though they have stopped marketing their apps in the country.
Granted, this isn’t the first time Musk has set a lofty goal and missed it. It’s well-established that Musk’s pronouncements about the timing of product launches are often unrealistic at best.
And to be fair, in an interview with podcaster Lex Fridman in August, Musk said that Grok 3 would “hopefully” be available in 2024 “if we’re lucky.”
But Grok 3’s MIA status is interesting because it’s part of a growing trend.
Last year, AI startup Anthropic failed to deliver a successor to its top-of-the-line Claude 3 Opus model. Months after announcing that a next-gen model, Claude 3.5 Opus, would be released by the end of 2024, Anthropic scrapped all mention of the model from its developer documentation. (According to one report, Anthropic did finish training Claude 3.5 Opus sometime last year but decided that releasing it didn’t make economic sense.)
Reportedly, Google and OpenAI have also suffered setbacks with their flagship models in recent months.