In an August podcast appearance, Musk said the second patient’s BCI implant was functioning properly. Within the year, Neuralink intends to implant the device in eight additional patients and greatly expand its clinical trials, a crucial step toward confirming the technology’s safety and efficacy on a broader scale.
Neuralink expands trials
Neuralink is expanding its BCI research beyond the US. Following approval for a feasibility study on using its N1 Implant to control an assistive robotic arm, the company announced its first international trial.
In November, Neuralink also received Health Canada’s approval for the CAN-PRIME Study, now open to Canadian nationals.
Portland General reported third quarter 2024 results on October 25th, 2024. The company reported net income of $94 million for the quarter, equal to $0.90 per diluted share on a GAAP basis, compared to $0.46 in Q3 2023. Retail energy deliveries rose 0.3% year-to-date compared to the same prior year period, but wholesale energy deliveries soared 45%. As a result, total energy deliveries rose 11%.
Management raised its long-term EPS growth guidance to 5% to 7% (from 4% to 6% previously), though from here we expect 4.5% earnings growth into 2029. As of October 2024, Portland General maintained that long-term guidance. Leadership also estimates that the company can grow the dividend by 5% to 7% over the long term, a 6% midpoint, which is consistent with the trailing 10-year average dividend growth rate of 5.8%.
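To get a rough sense of what that 5% to 7% dividend-growth range implies when compounded, here is a minimal sketch; the $2.00 starting dividend and five-year horizon are placeholder assumptions for illustration, not figures reported by Portland General.

```python
# Back-of-the-envelope compounding of the dividend-growth guidance.
# The $2.00 starting annual dividend and 5-year horizon are illustrative
# placeholders, not figures reported by Portland General.

start_dividend = 2.00
years = 5

for rate in (0.05, 0.06, 0.07):
    projected = start_dividend * (1 + rate) ** years
    print(f"{rate:.0%} growth -> ${projected:.2f} after {years} years")
```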
But in an open letter to employees and franchisees, McDonald’s senior leadership team said it remains committed to inclusion and believes a diverse workforce is a competitive advantage.
McDonald’s said it would continue to support efforts that ensure a diverse base of employees, suppliers and franchisees, but its diversity team will now be referred to as the Global Inclusion Team. The company said it would also continue to report its demographic information.
While this was a strong theoretical result, its practical implications weren’t clear, because modern LLMs are so much more complex. “It’s not easy to extend our proof,” Peng said. So his team used a different approach to study the abilities of more complicated transformers: They turned to computational complexity theory, which studies problems in terms of the resources, such as time and memory, needed to solve them.
They ended up using a well-known conjecture to show that the computational power of even multilayer transformers is limited when it comes to solving complicated compositional problems. Then, in December 2024, Peng and colleagues at the University of California, Berkeley posted a proof — without relying on computational complexity conjectures — showing that multilayer transformers indeed cannot solve certain complicated compositional tasks. Basically, some compositional problems will always be beyond the ability of transformer-based LLMs.
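To give a sense of what “compositional” means here, consider a toy example of my own (not one drawn from the cited papers): the final answer requires chaining several intermediate results, so a single wrong step anywhere in the chain spoils the output.

```python
# Toy illustration of a compositional task: the answer requires chaining
# intermediate results, each feeding the next, so getting the output right
# means every intermediate step must also be right.

def compose(fns, x):
    for f in fns:          # apply each function in order, threading the result through
        x = f(x)
    return x

steps = [lambda x: x + 3, lambda x: x * 2, lambda x: x - 5, lambda x: x * x]
print(compose(steps, 4))   # ((4 + 3) * 2 - 5) ** 2 = 81
```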
The US shot itself in the foot decades ago when it turned its education system into a profit center. China today graduates over 5 million STEM candidates a year; the US graduates fewer than 500,000. The US has not built a new university in decades, while China averages four a year. In China a degree will cost you less than $4k, including books, and you can get your doctorate for less than $10k. And if you are poor but smart enough? It’s free. US students come out with six-figure debt loads that they spend more than half their lives trying to pay off.
Altman admitted that DeepSeek has lessened OpenAI’s lead in AI, and he said he believes OpenAI has been “on the wrong side of history” when it comes to open sourcing its technologies. While OpenAI has open sourced models in the past, the company has generally favored a proprietary, closed source development approach.
“[I personally think we need to] figure out a different open source strategy,” Altman said. “Not everyone at OpenAI shares this view, and it’s also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years.”
In a follow-up reply, Kevin Weil, OpenAI’s chief product officer, said that OpenAI is considering open sourcing older models that aren’t state-of-the-art anymore. “We’ll definitely think about doing more of this,” he said, without going into greater detail.
Beyond prompting OpenAI to reconsider its release philosophy, Altman said that DeepSeek has pushed the company to potentially reveal more about how its so-called reasoning models, like the o3-mini model released today, show their “thought process.” Currently, OpenAI’s models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models. In contrast, DeepSeek’s reasoning model, R1, shows its full chain of thought.
“We’re working on showing a bunch more than we show today — [showing the model thought process] will be very very soon,” Weil added. “TBD on all — showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we’ll find the right way to balance it.”
Altman and Weil attempted to dispel rumors that ChatGPT, the chatbot platform through which OpenAI launches many of its models, would increase in price in the future. Altman said that he’d like to make ChatGPT “cheaper” over time, if feasible.
Altman previously said that OpenAI was losing money on its priciest ChatGPT plan, ChatGPT Pro, which costs $200 per month.
In a somewhat related thread, Weil said that OpenAI continues to see evidence that more compute power leads to “better” and more performant models. That’s in large part what’s necessitating projects such as Stargate, OpenAI’s recently announced massive data center project, Weil said. Serving a growing user base is fueling compute demand within OpenAI as well, he continued.
Asked about recursive self-improvement that might be enabled by these powerful models, Altman said he thinks a “fast takeoff” is more plausible than he once believed. Recursive self-improvement is a process where an AI system could improve its own intelligence and capabilities without human input.
Of course, it’s worth noting that Altman is notorious for overpromising. It wasn’t long ago that he lowered OpenAI’s bar for AGI.
One Reddit user asked whether OpenAI’s models, self-improving or not, would be used to develop destructive weapons — specifically nuclear weapons. This week, OpenAI announced a partnership with the U.S. government to give its models to the U.S. National Laboratories in part for nuclear defense research.
Microprocessor pioneer and physicist Federico Faggin, together with Prof. Giacomo Mauro D'Ariano, proposes that consciousness is not an emergent property of the brain but a fundamental aspect of reality itself: quantum fields are conscious and have free will. In this theory, our physical body is a quantum-classical ‘machine’ operated by the free-will decisions of quantum fields. Faggin calls the theory ‘Quantum Information Panpsychism’ (QIP) and claims that it can yield testable predictions in the near future. If the theory is correct, it will not only be the most accurate theory of consciousness but will also solve mysteries around the interpretation of quantum mechanics.
Weil said he trusted the government. “I’ve gotten to know these scientists and they are AI experts in addition to world class researchers,” he said. “They understand the power and the limits of the models, and I don’t think there’s any chance they just YOLO some model output into a nuclear calculation. They’re smart and evidence-based and they do a lot of experimentation and data work to validate all their work.”
The OpenAI team was asked several questions of a more technical nature, like when OpenAI’s next reasoning model, o3, will be released (“more than a few weeks, less than a few months,” Altman said); when the company’s next flagship “non-reasoning” model, GPT-5, might land (“don’t have a timeline yet,” said Altman); and when OpenAI might unveil a successor to DALL-E 3, the company’s image-generating model. DALL-E 3, which was released around two years ago, has gotten rather long in the tooth. Image-generation tech has improved by leaps and bounds since DALL-E 3’s debut, and the model is no longer competitive on a number of benchmark tests.
Can AI Models Show Us How People Learn? Impossible Languages Point a Way.
Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.
Learning a language can’t be that hard — every baby in the world manages to do it in a few years. Figuring out how the process works is another story. Linguists have devised elaborate theories to explain it, but recent advances in machine learning have added a new wrinkle. When computer scientists began building the language models that power modern chatbots like ChatGPT, they set aside decades of research in linguistics, and their gamble seemed to pay off. But are their creations really learning?
“If your model gets larger, you can solve much harder problems,” Peng said. “But if, at the same time, you also scale up your problems, it again becomes harder for larger models.” This suggests that the transformer architecture has inherent limitations.
To be clear, this is not the end of LLMs. Wilson of NYU points out that despite such limitations, researchers are beginning to augment transformers to help them better deal with, among other problems, arithmetic. For example, Tom Goldstein, a computer scientist at the University of Maryland, and his colleagues added a twist to how they presented numbers to a transformer that was being trained to add, by embedding extra “positional” information in each digit. As a result, the model could be trained on 20-digit numbers and still reliably (with 98% accuracy) add 100-digit numbers, whereas a model trained without the extra positional embedding was only about 3% accurate. “This suggests that maybe there are some basic interventions that you could do,” Wilson said. “That could really make a lot of progress on these problems without needing to rethink the whole architecture.”
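Here is a minimal sketch of the underlying idea (my own illustration, not the Goldstein team’s actual code): tag every digit with its position counted from the least-significant end, so digits that belong in the same column carry the same index no matter how long either number is.

```python
# Per-digit positional tagging, sketched: each digit token carries an explicit
# column index counted from the least-significant digit, so the digits of two
# numbers line up column by column regardless of their lengths.

def digit_tokens(number: str):
    """Return (digit, column) pairs, where column 0 is the least-significant digit."""
    return [(d, i) for i, d in enumerate(reversed(number))]

a, b = "90210", "1234567"
print(digit_tokens(a))  # [('0', 0), ('1', 1), ('2', 2), ('0', 3), ('9', 4)]
print(digit_tokens(b))  # columns align with those of `a`, even though b is longer
```

In the setting described above, it is this explicit column alignment that reportedly lets a model trained on 20-digit sums generalize to much longer ones.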
Another way to overcome an LLM’s limitations, beyond just increasing the size of the model, is to provide a step-by-step solution of a problem within the prompt, a technique known as chain-of-thought prompting. Empirical studies have shown that this approach can give an LLM such as GPT-4 a newfound ability to solve more varieties of related tasks. It’s not exactly clear why, which has led many researchers to study the phenomenon. “We were curious about why it’s so powerful and why you can do so many things,” said Haotian Ye, a doctoral student at Stanford University.
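For concreteness, here is a hedged sketch of what chain-of-thought prompting looks like in practice; the wording is illustrative, not any lab’s official template. The prompt embeds one fully worked example so the model is nudged to produce intermediate steps for the new question instead of jumping straight to an answer.

```python
# A chain-of-thought prompt contains a worked, step-by-step example, while a
# direct prompt asks for the answer outright. The wording here is illustrative.

direct_prompt = (
    "Q: A train travels at 60 km/h for 2.5 hours. How far does it go?\nA:"
)

cot_prompt = (
    "Q: A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "A: 12 pens is 12 / 3 = 4 groups of 3. Each group costs $2, "
    "so 4 * 2 = $8. The answer is $8.\n\n"
    "Q: A train travels at 60 km/h for 2.5 hours. How far does it go?\n"
    "A: Let's think step by step."
)

print(cot_prompt)
```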
When Ye was still an undergraduate at Peking University, he and his colleagues modeled the behavior of transformers with and without chain-of-thought prompting. Their proof, using another branch of computer science called circuit complexity theory, established how chain-of-thought prompting essentially turns a large problem into a sequence of smaller problems, making it possible for transformers to tackle more complex compositional tasks. “That means … it can solve some problems that lie in a wider or more difficult computational class,” Ye said.
But, Ye cautions, their result does not imply that real-world models will actually solve such difficult problems, even with chain-of-thought. The work focused on what a model is theoretically capable of; the specifics of how models are trained dictate how they can come to achieve this upper bound.
Ultimately, as impressive as these results are, they don’t contradict the findings from Dziri’s and Peng’s teams. LLMs are fundamentally matching the patterns they’ve seen, and their abilities are constrained by mathematical boundaries. Embedding tricks and chain-of-thought prompting simply extend their ability to do more sophisticated pattern matching. The mathematical results imply that you can always find compositional tasks whose complexity lies beyond a given system’s abilities. Even some newer “state-space models,” which have been touted as more powerful alternatives to transformers, show similar limitations.
On the one hand, these results don’t change anything for most people using these tools. “The general public doesn’t care whether it’s doing reasoning or not,” Dziri said. But for the people who build these models and try to understand their capabilities, it matters. “We have to really understand what’s going on under the hood,” she said. “If we crack how they perform a task and how they reason, we can probably fix them. But if we don’t know, that’s where it’s really hard to do anything.”
We just claimed that a lot will change in “a few years from now”. How realistic is this? Here’s the really good news: all the capabilities described above can be implemented with today’s technology. Not only that: we’re already doing it. We have assembled several organizations and individuals into a growing Gaia Consortium, and have of course been leveraging loads of existing components and building some of our own.