You are viewing a single comment's thread from:

RE: LeoThread 2025-02-01 10:54

Here is the daily technology #threadcast for 2/1/25. The goal is to make this a technology "reddit".

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make it a technology center.


Three New Qwen models today!!!

Not sure if any of them is open source, though...

The VVV token has a dynamic emission rate, so the Venice.ai company will get a percentage of the newly minted VVV tokens (to spend on developing their infrastructure) depending on how much Compute Power is being used.

Someone on the Venice.ai Discord just calculated why $VVV priced at $4 is an absolute steal!!! It'll give you roughly $18 a year of Compute Access!!

By the way, the calculation assumes 30% APR... The current APR is 180%!!!!
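If you want to sanity-check numbers like that yourself, here's a generic staking back-of-the-envelope. To be clear, this is not Venice's actual formula (the ~$18/year Compute Access figure comes from their own compute-allocation model, which isn't spelled out in this thread), and every input below is a placeholder.

```python
# Generic staking-yield back-of-the-envelope, NOT Venice.ai's actual model.
# All inputs are placeholders; swap in real figures to check a claim yourself.
tokens_staked = 100          # hypothetical VVV position
token_price_usd = 4.00       # price quoted in the thread
apr = 0.30                   # conservative 30% assumption (thread says current is ~180%)

yearly_rewards_tokens = tokens_staked * apr
yearly_rewards_usd = yearly_rewards_tokens * token_price_usd
print(f"{yearly_rewards_tokens:.1f} VVV ≈ ${yearly_rewards_usd:.2f} per year")
# 30.0 VVV ≈ $120.00 per year (at the placeholder numbers above)
```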

Just got the o3-mini model that was released yesterday set up in my coding environment. About to give it a first spin. It's apparently off the charts for coding, significantly better than anything seen up to now. #techtips #ai #openai
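If anyone wants to try it too, the setup is roughly this with the official Python SDK. A minimal sketch, assuming an API key in OPENAI_API_KEY and that the model is exposed under the name o3-mini:

```python
# Minimal sketch: calling o3-mini through the OpenAI Python SDK (openai >= 1.0).
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": "Write a Python function that parses an ISO 8601 date string."}],
    # reasoning_effort="high",  # optional knob for o-series reasoning models
)
print(resp.choices[0].message.content)
```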

I like Venice.ai, but their Context Length (currently 30k for most models) is so low that it's almost useless for coding. It needs to be at least double that, if not more. Preferably 128k.

Here's their Feedback page; the more upvotes a Feature Request has, the more likely they are to implement it. If you care, PLEASE UPVOTE: https://veniceai.featurebase.app/p/increasing-context-length-for-coder-models

way too small

Yeah, that's why I need more people voting, I don't care if you don't use Venice.ai, I just want them to know we want this!! !LOLZ

Speaking of things I want, have you seen my threadcast for today?

What's the difference between a taxidermist and a tax collector?
A taxidermist takes only your skin.

Credit: reddit

@mightpossibly, I sent you an

I tried voting; it seems it registered. Yeah, I saw, pretty cool! Is adding to the knowledge base a manual operation? If so, perhaps you could try to automate it somehow, so that it pulls data off the blockchain automatically.

Used a version of this to get the INLEO documentation: https://github.com/ahmadmanga/gitbook_scrapper

Thanks for the vote.

As for the question, for the agent in the threadcast, adding the knowledge base is manual, but I automated parts of collecting the information for it.
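For the fully automatic route, something along these lines could work. This is just a sketch, assuming a public Hive RPC node (https://api.hive.blog) and the standard condenser_api.get_discussions_by_blog call, with the account name as a placeholder; it is not my actual pipeline.

```python
import json, urllib.request

# Sketch: pull recent posts from the Hive blockchain so a knowledge base
# could be refreshed automatically instead of by hand.
def fetch_recent_posts(account: str, limit: int = 10):
    payload = {
        "jsonrpc": "2.0",
        "method": "condenser_api.get_discussions_by_blog",
        "params": [{"tag": account, "limit": limit}],
        "id": 1,
    }
    req = urllib.request.Request(
        "https://api.hive.blog",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]

# "leofinance" is a placeholder account name
for post in fetch_recent_posts("leofinance"):
    print(post["title"])
```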

single paragraphs


After this, Musk said in a podcast in August that the second patient’s BCI implant was functioning properly. Within the year, Neuralink intends to implant the device in eight additional patients and greatly expand its clinical trials. This expansion is a crucial first step in confirming the technology’s efficacy and safety on a broader scale.

Neuralink expands trials
Neuralink is expanding its BCI research beyond the US. Following approval for a feasibility study on using its N1 Implant to control an assistive robotic arm, the company announced its first international trial.

In November, Neuralink also received Health Canada’s approval for the CAN-PRIME Study, now open to Canadian nationals.

Portland General reported third quarter 2024 results on October 25th, 2024. The company reported net income of $94 million for the quarter, equal to $0.90 per diluted share on a GAAP basis, compared to $0.46 in Q3 2023. Retail energy deliveries rose 0.3% year-to-date compared to the same prior year period, but wholesale energy deliveries soared 45%. As a result, total energy deliveries rose 11%.

Company management upgraded the company’s long-term EPS growth guidance to 5% to 7% (from 4% to 6% previously), but from here we expect 4.5% earnings growth into 2029. As of October 2024, Portland General maintained that long-term guidance. Leadership also estimates that the company can grow the dividend by 5% to 7% over the long term, for a 6% mid-point, which is consistent with the trailing 10-year average dividend growth rate of 5.8%.

But in an open letter to employees and franchisees, McDonald’s senior leadership team said it remains committed to inclusion and believes a diverse workforce is a competitive advantage.

McDonald’s said it would continue to support efforts that ensure a diverse base of employees, suppliers and franchisees, but its diversity team will now be referred to as the Global Inclusion Team. The company said it would also continue to report its demographic information.

While this was a strong theoretical result, its practical implications weren’t clear, because modern LLMs are so much more complex. “It’s not easy to extend our proof,” Peng said. So his team used a different approach to study the abilities of more complicated transformers: They turned to computational complexity theory, which studies problems in terms of the resources, such as time and memory, needed to solve them.

They ended up using a well-known conjecture to show that the computational power of even multilayer transformers is limited when it comes to solving complicated compositional problems. Then, in December 2024, Peng and colleagues at the University of California, Berkeley posted a proof — without relying on computational complexity conjectures — showing that multilayer transformers indeed cannot solve certain complicated compositional tasks. Basically, some compositional problems will always be beyond the ability of transformer-based LLMs.

The US shot itself in the foot decades ago when it turned its education system into a profit center. China today graduates over 5 million STEM candidates a year; the US graduates fewer than 500,000. The US has not built a new university in decades, while China averages 4 a year. In China a degree will cost you less than $4k, including books, and you can get your doctorate for less than $10k. And if you are poor but smart enough? It's free. US students come out with six-figure debt loads that they spend more than half their lives trying to pay off.

Altman admitted that DeepSeek has lessened OpenAI’s lead in AI, and he said he believes OpenAI has been “on the wrong side of history” when it comes to open sourcing its technologies. While OpenAI has open sourced models in the past, the company has generally favored a proprietary, closed source development approach.

“[I personally think we need to] figure out a different open source strategy,” Altman said. “Not everyone at OpenAI shares this view, and it’s also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years.”

In a follow-up reply, Kevin Weil, OpenAI’s chief product officer, said that OpenAI is considering open sourcing older models that aren’t state-of-the-art anymore. “We’ll definitely think about doing more of this,” he said, without going into greater detail.

Beyond prompting OpenAI to reconsider its release philosophy, Altman said that DeepSeek has pushed the company to potentially reveal more about how its so-called reasoning models, like the o3-mini model released today, show their “thought process.” Currently, OpenAI’s models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models. In contrast, DeepSeek’s reasoning model, R1, shows its full chain of thought.

“We’re working on showing a bunch more than we show today — [showing the model thought process] will be very very soon,” Weil added. “TBD on all — showing all chain of thought leads to competitive distillation, but we also know people (at least power users) want it, so we’ll find the right way to balance it.”

Altman and Weil attempted to dispel rumors that ChatGPT, the chatbot platform through which OpenAI launches many of its models, would increase in price in the future. Altman said that he’d like to make ChatGPT “cheaper” over time, if feasible.

Altman previously said that OpenAI was losing money on its priciest ChatGPT plan, ChatGPT Pro, which costs $200 per month.

In a somewhat related thread, Weil said that OpenAI continues to see evidence that more compute power leads to “better” and more performant models. That’s in large part what’s necessitating projects such as Stargate, OpenAI’s recently announced massive data center project, Weil said. Serving a growing user base is fueling compute demand within OpenAI as well, he continued.

Asked about recursive self-improvement that might be enabled by these powerful models, Altman said he thinks a “fast takeoff” is more plausible than he once believed. Recursive self-improvement is a process where an AI system could improve its own intelligence and capabilities without human input.

Of course, it’s worth noting that Altman is notorious for overpromising. It wasn’t long ago that he lowered OpenAI’s bar for AGI.

One Reddit user asked whether OpenAI’s models, self-improving or not, would be used to develop destructive weapons — specifically nuclear weapons. This week, OpenAI announced a partnership with the U.S. government to give its models to the U.S. National Laboratories in part for nuclear defense research.

Weil said he trusted the government.

CPU inventor and physicist Federico Faggin PhD, together with Prof. Giacomo Mauro D'Ariano, proposes that consciousness is not an emergent property of the brain, but a fundamental aspect of reality itself: quantum fields are conscious and have free will. In this theory, our physical body is a quantum-classical ‘machine,’ operated by free will decisions of quantum fields. Faggin calls the theory 'Quantum Information Panpsychism' (QIP) and claims that it can give us testable predictions in the near future. If the theory is correct, it not only will be the most accurate theory of consciousness, it will also solve mysteries around the interpretation of quantum mechanics.

“I’ve gotten to know these scientists and they are AI experts in addition to world class researchers,” he said. “They understand the power and the limits of the models, and I don’t think there’s any chance they just YOLO some model output into a nuclear calculation. They’re smart and evidence-based and they do a lot of experimentation and data work to validate all their work.”

The OpenAI team was asked several questions of a more technical nature, like when OpenAI’s next reasoning model, o3, will be released (“more than a few weeks, less than a few months,” Altman said); when the company’s next flagship “non-reasoning” model, GPT-5, might land (“don’t have a timeline yet,” said Altman); and when OpenAI might unveil a successor to DALL-E 3, the company’s image-generating model. DALL-E 3, which was released around two years ago, has gotten rather long in the tooth. Image-generation tech has improved by leaps and bounds since DALL-E 3’s debut, and the model is no longer competitive on a number of benchmark tests.

Can AI Models Show Us How People Learn? Impossible Languages Point a Way.

Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.

Learning a language can’t be that hard — every baby in the world manages to do it in a few years. Figuring out how the process works is another story. Linguists have devised elaborate theories to explain it, but recent advances in machine learning have added a new wrinkle. When computer scientists began building the language models that power modern chatbots like ChatGPT, they set aside decades of research in linguistics, and their gamble seemed to pay off. But are their creations really learning?

“If your model gets larger, you can solve much harder problems,” Peng said. “But if, at the same time, you also scale up your problems, it again becomes harder for larger models.” This suggests that the transformer architecture has inherent limitations.

To be clear, this is not the end of LLMs. Wilson of NYU points out that despite such limitations, researchers are beginning to augment transformers to help them better deal with, among other problems, arithmetic. For example, Tom Goldstein, a computer scientist at the University of Maryland, and his colleagues added a twist to how they presented numbers to a transformer that was being trained to add, by embedding extra “positional” information in each digit. As a result, the model could be trained on 20-digit numbers and still reliably (with 98% accuracy) add 100-digit numbers, whereas a model trained without the extra positional embedding was only about 3% accurate. “This suggests that maybe there are some basic interventions that you could do,” Wilson said. “That could really make a lot of progress on these problems without needing to rethink the whole architecture.”
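A toy illustration of the general idea, not Goldstein's exact embedding scheme: pair each digit with its place value explicitly, so the model doesn't have to infer digit alignment from sequence position alone.

```python
# Toy sketch of per-digit position tags (not the paper's exact method):
# each digit is paired with its place index before being fed to a model.
def tag_digits(number: str):
    # rightmost digit gets index 0 (ones place), the next gets 1 (tens), ...
    return [(digit, place) for place, digit in enumerate(reversed(number))]

print(tag_digits("4096"))
# [('6', 0), ('9', 1), ('0', 2), ('4', 3)]
```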

Another way to overcome an LLM’s limitations, beyond just increasing the size of the model, is to provide a step-by-step solution of a problem within the prompt, a technique known as chain-of-thought prompting. Empirical studies have shown that this approach can give an LLM such as GPT-4 a newfound ability to solve more varieties of related tasks. It’s not exactly clear why, which has led many researchers to study the phenomenon. “We were curious about why it’s so powerful and why you can do so many things,” said Haotian Ye, a doctoral student at Stanford University.
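Concretely, chain-of-thought prompting just means the prompt contains a worked, step-by-step solution before the new question. A made-up illustration (wording and numbers are placeholders, not from any of the studies mentioned):

```python
# Illustrative only: the same task as a plain prompt and as a chain-of-thought
# prompt that includes one worked example.
direct_prompt = (
    "Q: A library has 42 books, lends out 15, and receives 4 donations. How many books now?\n"
    "A:"
)

cot_prompt = (
    "Q: A farm has 17 cows and sells 9, then buys twice as many as it sold. How many cows now?\n"
    "A: Start with 17. Selling 9 leaves 17 - 9 = 8. Buying 2 * 9 = 18 more gives 8 + 18 = 26. The answer is 26.\n"
    "Q: A library has 42 books, lends out 15, and receives 4 donations. How many books now?\n"
    "A:"
)
```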

When Ye was still an undergraduate at Peking University, he and his colleagues modeled the behavior of transformers with and without chain-of-thought prompting. Their proof, using another branch of computer science called circuit complexity theory, established how chain-of-thought prompting essentially turns a large problem into a sequence of smaller problems, making it possible for transformers to tackle more complex compositional tasks. “That means … it can solve some problems that lie in a wider or more difficult computational class,” Ye said.

But, Ye cautions, their result does not imply that real-world models will actually solve such difficult problems, even with chain-of-thought. The work focused on what a model is theoretically capable of; the specifics of how models are trained dictate how they can come to achieve this upper bound.

Ultimately, as impressive as these results are, they don’t contradict the findings from Dziri’s and Peng’s teams. LLMs are fundamentally matching the patterns they’ve seen, and their abilities are constrained by mathematical boundaries. Embedding tricks and chain-of-thought prompting simply extend their ability to do more sophisticated pattern matching. The mathematical results imply that you can always find compositional tasks whose complexity lies beyond a given system’s abilities. Even some newer “state-space models,” which have been touted as more powerful alternatives to transformers, show similar limitations.

On the one hand, these results don’t change anything for most people using these tools. “The general public doesn’t care whether it’s doing reasoning or not,” Dziri said. But for the people who build these models and try to understand their capabilities, it matters. “We have to really understand what’s going on under the hood,” she said. “If we crack how they perform a task and how they reason, we can probably fix them. But if we don’t know, that’s where it’s really hard to do anything.”

We just claimed that a lot will change in “a few years from now”. How realistic is this? Here’s the really good news: all the capabilities described above can be implemented with today’s technology. Not only that: we’re already doing it. We have assembled several organizations and individuals into a growing Gaia Consortium, and have of course been leveraging loads of existing components and building some of our own.

Why Computer Scientists Consult Oracles

Hypothetical devices that can quickly and accurately answer questions have become a powerful tool in computational complexity theory.

Pose a question to a Magic 8 Ball, and it’ll answer yes, no or something annoyingly indecisive. We think of it as a kid’s toy, but theoretical computer scientists employ a similar tool. They often imagine they can consult hypothetical devices called oracles that can instantly, and correctly, answer specific questions. These fanciful thought experiments have inspired new algorithms and helped researchers map the landscape of computation.

The researchers who invoke oracles work in a subfield of computer science called computational complexity theory. They’re concerned with the inherent difficulty of problems such as determining whether a number is prime or finding the shortest path between two points in a network. Some problems are easy to solve, others seem much harder but have solutions that are easy to check, while still others are easy for quantum computers but seemingly hard for ordinary ones.

Complexity theorists want to understand whether these apparent differences in difficulty are fundamental. Is there something intrinsically hard about certain problems, or are we just not clever enough to come up with a good solution? Researchers address such questions by sorting problems into “complexity classes” — all the easy problems go in one class, for example, and all the easy-to-check problems go in another — and proving theorems about the relationships between those classes.

Unfortunately, mapping the landscape of computational difficulty has turned out to be, well, difficult. So in the mid-1970s, some researchers began to study what would happen if the rules of computation were different. That’s where oracles come in.

Like Magic 8 Balls, oracles are devices that immediately answer yes-or-no questions without revealing anything about their inner workings. Unlike Magic 8 Balls, they always say either yes or no, and they’re always correct — an advantage of being fictional. In addition, any given oracle will only answer a specific type of question, such as “Is this number prime?”

What makes these fictional devices useful for understanding the real world? In brief, they can reveal hidden connections between different complexity classes.

Take the two most famous complexity classes. There’s the class of problems that are easy to solve, which researchers call “P,” and the class of problems that are easy to check, which researchers call “NP.” Are all easy-to-check problems also easy to solve? If so, that would mean that NP would equal P, and all encryption would be easy to crack (among other consequences). Complexity theorists suspect that NP does not equal P, but they can’t prove it, even though they’ve been trying to pin down the relationship between the two classes for over 50 years.

Oracles have helped them better understand what they’re working with. Researchers have invented oracles that answer questions that help solve many different problems. In a world where every computer had a hotline to one of these oracles, all easy-to-check problems would also be easy to solve, and P would equal NP. But other, less helpful oracles have the opposite effect. In a world populated by these oracles, P and NP would be provably different.
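For reference, the formal statement being paraphrased here is the Baker-Gill-Solovay theorem (1975): there is an oracle relative to which the two classes coincide, and another relative to which they provably differ.

```latex
\exists\, A:\ \mathsf{P}^{A} = \mathsf{NP}^{A}
\qquad \text{and} \qquad
\exists\, B:\ \mathsf{P}^{B} \neq \mathsf{NP}^{B}
```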

Researchers have used this knowledge to get a better grasp on the P versus NP problem. The first attempts at determining the relationship between P and NP used an elegant trick called diagonalization that had been essential for other major results in computer science. But researchers soon realized that any proof based on diagonalization would also apply to any world where every computer can consult the same oracle. This spelled doom, as oracles change the answer to the P versus NP question. If researchers could use diagonalization to prove that P and NP are different in the real world, the same proof would imply that P and NP are different in an oracle-infused world where they’re clearly equivalent. That means any diagonalization-based solution to the P versus NP problem would be self-contradictory. Researchers concluded that they’d need new techniques to make progress.

Oracles have also been helpful in the study of quantum computing. In the 1980s and 1990s, researchers discovered ways to harness quantum physics to rapidly solve certain problems that seemed hard for ordinary “classical” computers. But did these problems just seem hard, or were they truly hard? Proving it one way or another would require radically new mathematical techniques.

Because of this, researchers have studied how quantum computers fare on problems involving oracles. These efforts can provide indirect evidence that quantum computers really are more powerful than classical ones, and they can help researchers explore qualitatively new tasks where quantum computers might excel. Sometimes, they can even have practical applications. In 1994, the applied mathematician Peter Shor was inspired by a recent oracle result to develop a fast quantum algorithm for factoring large numbers — a task whose apparent difficulty underlies the cryptographic systems that keep our online data secure. Shor’s discovery kicked off a race to build powerful quantum computers that continues to this day.

It’s hard to predict the future of complexity theory, but not every question about the trajectory of the field is equally hard to answer. Will researchers continue to consult oracles? Signs point to yes.

I come from a working-class/lower-middle-class background but had friends from a very posh school. What I found was that my posh friends' greatest terror was losing their status (not living up to parents' or social expectations), so they would be happy to lie and play games to maintain status, whereas my working-class friends' families' greatest terror was being rejected by the community, so they would never play games or lie in a way that might risk them being rejected. Community was more important to them, maybe because more of their jobs and social life was communal, i.e., factories, churches, temples, and pubs. Either way, powerful or powerless, fear is the controlling emotion, a terrible way to live a life.

Trump has signed an executive order that would require federal employees to work in-office five days a week, reversing a remote working trend that took off in the early stages of the COVID-19 pandemic.

Let's make America great again

While Erwin Schrödinger had an interest in Indian philosophy, especially the Upanishads, the text exaggerates the extent to which it influenced his scientific work. The claim of a "second Schrödinger equation" is false, and the connections drawn between quantum phenomena and Upanishadic concepts are largely speculative and presented as more concrete than they are. The video over-interprets the available evidence to create a stronger link than actually exists. It's important to differentiate between philosophical interests and direct scientific influence.

Peng wanted to test this hunch. His team started by studying the properties of a simple transformer, one with only a single layer, which learns to “pay attention” to the ordering and position of a sentence’s words when trying to predict the next word. (Modern LLMs have scores of such layers.) The team established a link between the complexity of the transformer layer and the “domain size,” or the number of bits required to represent the questions. By focusing on this simple model, they proved a mathematical bound. “If the total number of parameters in this one-layer transformer is less than the size of a domain, then transformers provably cannot solve the compositional task,” Peng said. In other words, an LLM with only one transformer layer was clearly and mathematically limited.

To understand why, imagine we feed an LLM two pieces of information: The father of Frédéric Chopin was Nicolas Chopin, and Nicolas Chopin was born on April 15, 1771. If we then ask it, “What is the birth date of Frédéric Chopin’s father?” the LLM would have to answer by composing, or putting together, the different facts. In effect, it would need to answer the following nested question: “What is the birth date of (Who is the father of (Frédéric Chopin)?)?” If the LLM predicts the wrong words as an answer, it’s said to have hallucinated — in this case, possibly as a result of failing to solve the compositional task.
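To make the "composing facts" idea concrete, here is a trivial sketch of the same two-hop question answered explicitly from stored facts. The point is that the answer requires chaining one lookup into another, which is what the LLM has to do implicitly.

```python
# Minimal sketch of the two-hop "compositional" lookup described above.
facts = {
    "father_of": {"Frédéric Chopin": "Nicolas Chopin"},
    "birth_date_of": {"Nicolas Chopin": "April 15, 1771"},
}

def birth_date_of_father(person: str) -> str:
    father = facts["father_of"][person]        # hop 1: who is the father?
    return facts["birth_date_of"][father]      # hop 2: when was he born?

print(birth_date_of_father("Frédéric Chopin"))  # April 15, 1771
```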

The team observed the same pattern when it came to solving Einstein’s riddle: GPT-3 failed when asked to answer bigger versions of the puzzle compared to the ones it was fine-tuned on. “It’s mimicking something that it has seen, but it doesn’t have full understanding of it,” Dziri said.

As Dziri and her co-authors were finalizing their results, a different team was taking another approach to understanding why LLMs struggled with compositional tasks. Binghui Peng, at the time a doctoral student at Columbia University, was working with one of his advisers, Christos Papadimitriou, and colleagues to understand why LLMs “hallucinate,” or generate factually incorrect information. Peng, now a postdoctoral researcher at Stanford University, suspected it was because transformers seem to lack the “capability of composition.”

Dziri’s team thought that maybe the LLMs simply hadn’t seen enough examples in their training data, so they fine-tuned GPT-3 on 1.8 million examples of multiplying two numbers. Then, when they showed it new problems, the LLM aced them — but only if they were sufficiently similar to what it had seen during training. For example, the training data included the multiplication of two three-digit numbers, and of a two-digit number with a four-digit number, but when the model was asked to multiply a four-digit number with a three-digit number, it succeeded only 2% of the time. “If they are truly reasoning and understanding certain tasks, they should get the implicit algorithm,” Dziri said. That’s not what her team saw. “That raises a lot of questions about how LLMs perform tasks and whether they’re doing true reasoning.”
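A generic way to probe that kind of length generalization yourself (this is not Dziri's exact protocol; ask_model is a hypothetical stand-in for whichever LLM is being tested):

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical helper: send the prompt to the LLM under test, return its reply."""
    raise NotImplementedError

def multiplication_accuracy(digits_a: int, digits_b: int, trials: int = 100) -> float:
    correct = 0
    for _ in range(trials):
        a = random.randint(10 ** (digits_a - 1), 10 ** digits_a - 1)
        b = random.randint(10 ** (digits_b - 1), 10 ** digits_b - 1)
        reply = ask_model(f"What is {a} * {b}? Answer with the number only.")
        correct += reply.strip() == str(a * b)
    return correct / trials

# e.g. compare in-distribution shapes (3x3 digits) vs. held-out shapes (4x3 digits)
```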

Take basic multiplication. Standard LLMs, such as ChatGPT and GPT-4, fail badly at it. In early 2023 when Dziri’s team asked GPT-4 to multiply two three-digit numbers, it initially succeeded only 59% of the time. When it multiplied two four-digit numbers, accuracy fell to just 4%.

The team also tested the LLMs on tasks like Einstein’s riddle, where it also had limited success. GPT-4 always got the right answer when the puzzle involved two houses with two attributes per house. But the accuracy fell to 10% when the complexity of the puzzle increased to four houses with four attributes per house. For the original version in Life International — five houses, each with five attributes — the success rate was 0%.
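As a rough aside on why accuracy collapses (my own illustration, not a claim from the article): the raw search space of such a puzzle grows as (n!)^m for n houses and m attribute categories, so the larger versions are combinatorially far bigger than the small ones the model handles.

```python
from math import factorial

# Number of ways to assign attribute values to houses before any clues are applied:
# (n!)^m for n houses and m attribute categories.
for houses, attrs in [(2, 2), (4, 4), (5, 5)]:
    print(houses, attrs, factorial(houses) ** attrs)
# 2 2 4
# 4 4 331776
# 5 5 24883200000
```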

The largest LLMs — OpenAI’s o1 and GPT-4, Google’s Gemini, Anthropic’s Claude — train on almost all the available data on the internet. As a result, the LLMs end up learning the syntax of, and much of the semantic knowledge in, written language. Such “pre-trained” models can be further trained, or fine-tuned, to complete sophisticated tasks far beyond simple sentence completion, such as summarizing a complex document or generating code to play a computer game. The results were so powerful that the models seemed, at times, capable of reasoning. Yet they also failed in ways both obvious and surprising.

Ironically, LLMs have only themselves to blame for this discovery of one of their limits. “The reason why we all got curious about whether they do real reasoning is because of their amazing capabilities,” Dziri said. They dazzled on tasks involving natural language, despite the seeming simplicity of their training. During the training phase, an LLM is shown a fragment of a sentence with the last word obscured (though technically it isn’t always a single word). The model predicts the missing information and then “learns” from its mistakes.
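Schematically, that training objective is just next-token cross-entropy. A toy, hand-wired illustration (the probabilities are made up; there is no real model here):

```python
import math

# The model assigns a probability to the obscured next token and is penalized
# by the negative log of the probability it gave to the correct one.
context = ["the", "cat", "sat", "on", "the"]
target = "mat"
predicted_probs = {"mat": 0.6, "floor": 0.3, "moon": 0.1}  # made-up distribution

loss = -math.log(predicted_probs[target])  # cross-entropy for this one step
print(round(loss, 3))                      # 0.511 -> smaller when the model is more confident
```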

“The work is really motivated to help the community make this decision about whether transformers are really the architecture we want to embrace for universal learning,” said Andrew Wilson, a machine learning expert at New York University who was not involved with this study.

Einstein’s riddle requires composing a larger solution from solutions to subproblems, which researchers call a compositional task. Dziri’s team showed that LLMs that have only been trained to predict the next word in a sequence — which is most of them — are fundamentally limited in their ability to solve compositional reasoning tasks. Other researchers have shown that transformers, the neural network architecture used by most LLMs, have hard mathematical bounds when it comes to solving such problems. Scientists have had some successes pushing transformers past these limits, but those increasingly look like short-term fixes. If so, it means there are fundamental computational caps on the abilities of these forms of artificial intelligence — which may mean it’s time to consider other approaches.

The conflict has killed more than 28,000 people, has forced millions to flee their homes and has left some families eating grass in a desperate attempt to survive as famine sweeps parts of the country.

It has been marked by gross atrocities including ethnically motivated killing and rape, according to the United Nations and rights groups. The International Criminal Court said it was investigating alleged war crimes and crimes against humanity.

Wow, just wow. As an engineer trying to solve problems that seem beyond me, at the point of mental brownout I often seek the shadow realm, where after 20-25 minutes a solution, or at least an approach to a solution, is offered. This seems to come from outside of me. This video is the most lucid description of the underlying phenomenon that I've ever seen. Thank you to all who participated in bringing it to me.

Khalid al-Aleisir, minister of culture and government spokesperson, condemned the attack, saying that the casualties included many women and children. He also said the attack caused “widespread destruction to private and public properties.”

“This criminal act adds to the bloody record of this militia,” he said in a statement. “It constitutes a blatant violation of international humanitarian law.”

The conflict in Sudan started in April 2023 when simmering tensions between the leaders of the military and the RSF exploded into open fighting in the capital, Khartoum, and other cities across the sprawling northeastern African country.

Paramilitary Group Attacks an Open Market in Sudan, Killing 54 People and Wounding at Least 158

CAIRO (AP) — Sudan’s health authorities say a notorious paramilitary group fighting against the country’s military has attacked an open market in the city of Omdurman, killing 54 people.

Saturday’s attack by the Rapid Support Forces on the Sabrein Market also wounded at least 158 others, the Health Ministry said in a statement.

There was no immediate comment from the RSF.

Negotiating a phase two deal could be difficult. Hamas says it won’t release the remaining hostages without an end to the war and a full Israeli withdrawal from Gaza, after reasserting its rule over Gaza within hours of the truce.

Meanwhile, Israel says it is still committed to destroying Hamas, and a key far-right partner in Prime Minister Benjamin Netanyahu’s coalition is already calling for the war to resume after the ceasefire’s first phase.

Today’s exchange is part of a deal that paused fighting in Gaza on Jan. 19. Israeli forces have pulled back from most of Gaza, allowing hundreds of thousands of people to return to what remains of their homes and humanitarian groups to surge assistance.

It calls for Hamas to release a total of 33 hostages, including women, children, older adults and sick or wounded men, in exchange for nearly 2,000 Palestinian prisoners. Israel says Hamas has confirmed that eight of the hostages to be released in this phase are dead.

The initial Phase One ceasefire paused fighting for six weeks, calling for the sides to use that time to negotiate a second phase in which Hamas would release the remaining hostages and the ceasefire would continue indefinitely. The war could resume in early March if an agreement is not reached.

A video of their abduction by armed men showed Shiri swaddling her two redheaded boys in a blanket — Ariel, 4, and Kfir, 9 months old at the time. Kfir was the youngest of about 250 people taken captive on Oct. 7, and his plight quickly came to represent the helplessness and anger the hostage-taking stirred in Israel, where the Bibas family has become a household name.

Like Bibas, Kalderon was also captured from Kibbutz Nir Oz. His two children and ex-wife, Hadas, were also taken, but they were freed during the 2023 ceasefire.

Keith Siegel, originally from Chapel Hill, North Carolina, was taken hostage from Kibbutz Kfar Aza, along with his wife, Aviva Siegel. She was released during the 2023 ceasefire and has waged a high-profile campaign to free Keith and other hostages.

The hostages to be released, according to Hamas and Israel, are: Yarden Bibas, 35; American-Israeli Keith Siegel, 65; and French-Israeli Ofer Kalderon, 54. All were abducted during the Hamas-led attack on Israel on Oct. 7, 2023, that sparked the war.

News that Yarden Bibas, 35, is among the hostages set to be freed on Saturday brought renewed attention to the uncertain fate of the Bibas family. Hamas says his kidnapped wife and two young boys were killed in an Israeli airstrike, but Israel has not verified the claim.

KHAN YOUNIS, Gaza Strip — Hamas handed two hostages over to the Red Cross in the southern Gaza Strip on Saturday as part of its ceasefire deal with Israel.

The militants released Yarden Bibas, 35, and French-Israeli Ofer Kalderon, 54, in a highly stage-managed and orderly handover to the Red Cross. Both had been abducted during the Hamas-led attack on Israel on Oct. 7, 2023, that sparked the war.

Another hostage, American-Israeli Keith Siegel, 65, was also set to be released Saturday and was expected to be handed over to the Red Cross in Gaza City to the north.

Red Cross vehicles arrived in a location in the city of Khan Younis in the southern Gaza Strip Saturday where Hamas was set to release hostages in its ceasefire deal with Israel.

TEL AVIV, Israel — Two released hostages, Ofer Kalderon and Yarden Bibas, have arrived in Israel and are on their way to an initial reception point. Along the road leading to the military base, small groups of supporters waited for the convoys waving Israeli flags.

The two hostages were freed Saturday as part of the fourth such release in Israel's ceasefire with Hamas. One more, American-Israeli Keith Siegel, is set to be released in Gaza City later Saturday morning.

Israel and Hamas are set next week to begin negotiating a second phase of the ceasefire, which calls for releasing the remaining hostages and extending the truce indefinitely. The war could resume in early March if an agreement is not reached.

Palestinian health authorities in Gaza also announced that the long-shuttered Rafah border crossing with Egypt would reopen on Saturday for thousands of Palestinians who desperately need medical care — a breakthrough that signals the ceasefire agreement continues to gain traction.

Middle East Latest: 2 Freed Hostages Are Back in Israel

Hamas released two hostages in the southern Gaza Strip on Saturday as part of its ceasefire deal with Israel, while Palestinian authorities say Israel has agreed to release dozens of prisoners in the fourth round of exchanges during the Gaza ceasefire deal between Israel and Hamas.

The six-week phase one truce calls for the release of 33 hostages and nearly 2,000 prisoners, as well as the return of Palestinians to northern Gaza and an increase in humanitarian aid to the devastated territory.

She later posted on X: "DC panicked over a 5 pm deadline cited in a Trump admin memo. Guidance was sent to agencies to remove gender ideology-related content from their websites by 5 pm today, but admin doesn't plan to shut down websites that don't comply, McLaurine Pinover, OPM communications director said."

Trump Admin to Take Down Many Government Websites

Trump administration officials are putting a pause on most federal government websites as of 5 p.m. EST on Friday, a source familiar with the matter said.

The move was first reported by CBS News. Shortly after 5 p.m., the U.S. Census website went down for some users. It was not immediately clear how many other websites had gone down.

CBS News senior White House reporter Jennifer Jacobs posted on X: "Guidance from the Office of Personnel Management directed all federal agencies to take steps 'no later than 5:00 EST on Wednesday' to 'take down all outward facing media (websites, social media accounts, etc.) of DEIA offices.' Some of the memo was misinterpreted, aides said."

Maybe the firings will begin! Better yet, let's take the whole government offline!

This vlogger says we shouldn’t overly praise DeepSeek, as excessive hype can harm a startup.
One tech vlogger said that while the whole internet is hyping up DeepSeek, and many investment analysts are bullish on related industries and concepts, he personally doesn’t believe DeepSeek will survive for long. He thinks it will definitely face problems.

And I do agree with him

Roberts and Johnson's clients also include U.S. Senator Rick Scott of Florida, the National Republican Congressional Committee and the National Republican Senatorial Committee, the firm said.

Musk and spokespersons for Johnson, Scott and Ramaswamy did not immediately respond to requests for comment.

Other conservative law firms with ties to Trump's inner circle include Dhillon Law Group. Trump named founder Harmeet Dhillon to head the Justice Department's civil rights division and partner David Warrington as his White House counsel.

Another is Schaerr Jaffe, which Musk has tapped to represent X users in free speech cases. Trump appointed Schaerr Jaffe partner Mark Paoletta as general counsel to the U.S. Office of Management and Budget, a key agency in the drive to shrink the federal government.

The new firm said in a statement it will represent "candidates, campaigns, and causes at the forefront of the conservative and center-right movement."

Its founder is Chris Gober, a lawyer for Musk's America PAC who also served as its former treasurer.

He is teaming up with Steve Roberts and Jessica Furst Johnson, who left their law firm Holtzman Vogel to join Gober as partners, the trio said on Friday.

Gober was not immediately available for comment. He told the New York Times, which reported the firm's launch early on Friday, that he wanted Lex Politica to become "synonymous with the conservative movement."

Lawyers for Musk, GOP Campaigns Form New Washington Firm

A lawyer for billionaire Tesla CEO Elon Musk's political action committee is launching a new law firm along with two attorneys whose clients include Republicans U.S. House Speaker Mike Johnson and one-time presidential candidate Vivek Ramaswamy.

The firm, Lex Politica, expands an ecosystem of small conservative law firms that have gained prominence since Republican President Donald Trump's first term and strengthened their ties to Trump and his allies since his reelection.

The network has not commented on talks about a potential settlement, reported by The Wall Street Journal and The New York Times. Paramount executives are seeking Trump administration approval of a sale of the company to another entertainment firm, Skydance.

ABC News in December settled a defamation lawsuit by Trump over statements made by anchor George Stephanopoulos, agreeing to pay $15 million toward Trump's presidential library rather than engage in a public fight. Meta has reportedly paid $25 million to settle Trump's lawsuit against the company over its decision to suspend his social media accounts following the Jan. 6, 2021, riot at the U.S. Capitol.

The Harris interview initially drew attention because CBS News showed Harris giving completely different responses to a question posed by correspondent Bill Whitaker in clips that were aired on "Face the Nation" on Oct. 6 and the next night on "60 Minutes." The network said each clip came from a lengthy response by Harris to Whitaker's question, but they were edited to fit time constraints on both broadcasts.

In his lawsuit, filed in Texas on Nov. 1, Trump charged it was deceptive editing designed to benefit Harris and constituted "partisan and unlawful acts of voter interference."

Trump, who turned down a request to be interviewed by "60 Minutes" during the campaign, has continued his fight despite winning the election less than a week after the lawsuit was filed.

There are no laws of physics preventing Optimus from working for you to do all your chores: laundry, raking leaves, doing the groceries, handling deliveries, assembling IKEA furniture, being a better-than-average masseur, restaurant chef, teacher for your kids, electrician, plumber, lawyer for advice, tax accountant doing your taxes, personal assistant/secretary, nurse and doctor, personal trainer, etc., all of which you can benefit from immensely or rent out to others for a task. Eventually.

The network said Friday that it was compelled by Brendan Carr, Trump's appointee as FCC chairman, to turn over the transcripts and camera feeds of the interview for a parallel investigation by the commission. "60 Minutes" has resisted releasing transcripts for this and all of its interviews, to avoid second-guessing of its editing process.

The case, particularly a potential settlement, is being closely watched by advocates for press freedom and by journalists within CBS, whose lawyers called Trump's lawsuit "completely without merit" and promised to vigorously fight it after it was filed.

CBS to Give FCC '60 Minutes' Harris Interview Transcripts

CBS says it will turn over an unedited transcript of its October interview with former Vice President Kamala Harris to the Federal Communications Commission, part of President Donald Trump's ongoing fight with the network over how it handled a story about his opponent.

Trump sued CBS for $10 million over the "60 Minutes" interview, claiming it was deceptively edited to make Harris look good. Published reports said that CBS's parent company, Paramount, has been talking to Trump's lawyers about a settlement.

On Jan. 12, the American Alliance for Equal Rights sued McDonald's over the HACER program. The alliance, which challenges programs that use race or ethnicity as a factor in their decisions, is run by Edward Blum, the conservative activist who also successfully challenged affirmative action programs in college admissions.

On Friday, McDonald’s said it reached a settlement with the American Alliance for Equal Rights that will allow it to consider this year’s applicants. The Chicago company said more than 3,000 students have already applied for this year's scholarships.

McDonald’s said the program will now be open to any student who can demonstrate an impact on or commitment to the Latino community. Applicants no longer need to have at least one Latino parent.

McDonald's Settles Lawsuit Over Latino Scholarship Program

McDonald’s said Friday it is changing — but not eliminating — a scholarship program for Latino students after it was sued by a group that opposes affirmative action.

McDonald’s HACER National Scholarship Program, which was founded in 1985, awards college scholarships to students with at least one Latino parent. The program has awarded more than $33 million in scholarships to more than 17,000 students.

It was not immediately clear how many agreements would be affected by the new policy, which refers to them as "lame-duck collective bargaining agreements."

Collective bargaining agreements are deals between unions and their employees that outline working conditions, pay, and other policies.

The move comes as Trump embarks on a massive makeover of the U.S. government, firing and sidelining hundreds of civil servants in his first steps toward downsizing the bureaucracy and installing more loyalists.

The memo cites a Department of Education collective bargaining agreement reached three days before Trump took office that "generally prohibits the agency from returning remote employees to their offices."

Trump to Cancel Federal Workers' Recent Union Deals

Donald Trump said on Friday that any collective bargaining agreements reached with federal workers within 30 days of his inauguration will not be approved, the latest salvo in the president's bid to remake the federal workforce.

In a memo addressed to the heads of all executive departments and agencies, Trump said former President Joe Biden's administration purposefully finalized collective bargaining agreements with federal employees in its final days "in an effort to harm my administration by extending its wasteful and failing policies beyond its time in office."

General Mills has grown its earnings-per-share at a 5.2% average annual rate in the last decade. In recent years, this decelerated, but the company has accelerated again since the onset of the pandemic. We expect approximately 5.0% annual earnings-per-share growth over the next five years, mostly thanks to Blue Buffalo. Earnings-per-share will also benefit from a decent amount of share repurchases.
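As a quick back-of-the-envelope on what that growth assumption compounds to (the starting EPS is a placeholder, not a figure from this write-up):

```python
# Compounding "approximately 5.0% annual EPS growth" over five years.
eps = 4.50            # hypothetical starting earnings-per-share
growth = 0.05
for year in range(5):
    eps *= 1 + growth
print(round(eps, 2))  # ~5.74, i.e. roughly +28% cumulatively over five years
```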

GIS stock currently yields 4.0%.

In mid-December, General Mills reported (12/18/24) results for Q2-2025. Net sales and organic sales grew 2% and 1%, respectively, over last year’s quarter thanks to higher volumes, which more than offset a slight decline in the price due to the composition of the product mix.

This marked an improvement vs. the marginal decline reported in the previous quarter. Gross margin expanded from 34.4% to 36.9%, as cost savings offset input inflation. Adjusted earnings-per-share grew 12%, from $1.25 to $1.40, and exceeded the analysts’ consensus by $0.18.

General Mills (GIS)

General Mills is a packaged food giant, with more than 100 brands and operations in more than 100 countries. General Mills has not cut its dividend for 124 consecutive years. It has returned to growth mode in the last five years, mostly thanks to the acquisition of Blue Buffalo and the pandemic, which greatly increased food consumption at home.

On September 12th, 2024, General Mills announced that it agreed to sell its North American yogurt business for $2.1 billion in cash. The proceeds will be used for share repurchases. The sale of this business, which generated 8% of total sales last year, is expected to reduce earnings-per-share by ~3% in the first year after the sale.

Asset managers like T. Rowe have low variable costs. As a result, higher revenues, driven primarily by increasing assets under management, allow for margin expansion and attractive earnings growth rates. Assets under management grow in two basic ways: increased contributions and higher underlying asset values. While asset values are finicky, the trend is upward over the long-term.

On the contribution side, T. Rowe Price’s strong past performance is a key selling point and could attract customers going forward. In addition, T. Rowe has another EPS growth lever in the way of share repurchases.

TROW stock currently yields 4.3%.

During the quarter, assets under management (AUM) improved $61.8 billion, or 3.9%, to $1.63 trillion. Market appreciation of $74 billion was partially offset by $12.2 billion of net client outflows. Operating expenses of $1.17 billion increased 7.6% year-over-year, but just 0.3% quarter-over-quarter.

T. Rowe Price’s earnings, as well as its dividends, have grown substantially over the last decade. While earnings did drop during the last financial crisis, the overall record has been solid. Since 2014, the company has grown earnings-per-share by an average compound rate of 5.9% per annum. Moreover, the company performed well in 2020.

T. Rowe Price Group (TROW)

T. Rowe Price Group is one of the largest publicly traded asset managers. The company provides a broad array of mutual funds, sub-advisory services, and separate account management for individual and institutional investors, retirement plans and financial intermediaries. The firm had assets under management of more than $1.6 trillion as of September 30th, 2024.

T. Rowe Price is a Dividend Aristocrat, having increased its dividend for 38 years in a row.

On November 1st, 2024, T. Rowe Price reported third quarter results for the period ending September 30th, 2024. For the quarter, revenue grew 6.9% to $1.79 billion, though this was $60 million below estimates. Adjusted earnings-per-share of $2.57 compared favorably to $2.17 in the prior year and was $0.22 more than expected.

We expect that Portland General will generate this earnings growth through increased annual energy deliveries, as a result of commercial growth, and strong growth in industrial energy demand due to customer expansion. Rate increases, customer additions, and completion of construction projects will all further fuel Portland General’s earnings growth.

Portland General Electric (POR)

Portland General Electric is an electric utility based in Portland, Oregon, providing electricity to more than 930,000 customers in 51 cities. The company owns or contracts more than 3.5 gigawatts of energy generation, between gas, coal, wind & solar, and hydro.

POR has about 3,000 full-time employees. In 2023, the corporation generated $2.9 billion in revenue. The utility company is diversified by customer, with 37% of retail deliveries going to residential customers, 34% to commercial clients, and 29% to industrial clients. The company is forecasting that 80% of its power delivered to customers by 2030 will be carbon free, and 100% carbon free by 2040. On April 19th, 2024, Portland General Electric announced a 5% increase in the quarterly dividend to $0.50 per share.

3 Conservative Income Stocks for Retirees

Retirees who purchase stocks for investment income often have to settle for low yielding stocks. The S&P 500 Index yields just 1.3% right now, on average.

However, there are plenty of high dividend stocks that have strong current yields above 4%, and also have secure dividend payouts that can grow over time.

The following 3 dividend stocks have high dividend yields and safe payouts, which makes them attractive for retirement income.

Shell is writing off a nearly $1 billion investment. It announced its decision on Thursday, as it reported a 16% decline in full-year earnings, to $23.7 billion from $28.3 billion. Most of its business is oil and gas.

Danish wind developer Orsted was close to beginning work on two offshore wind farms in New Jersey but scrapped the project in Oct. 2023 after deciding it would not be economical.

A lot of clean energy is cheap now, but offshore wind is still among the most expensive. That can make these projects less attractive to investors, absent strong policy support, said Coco Zhang, vice president for environmental, social and governance research at ING.

“The potential uncertainty that the executive order has brought to the market, it cannot be ignored,” she said.

The Biden administration approved plans to build the Atlantic Shores project in two phases in October, but construction has not begun. Oliver Metcalfe, head of wind research at BloombergNEF, said the partners are facing significant uncertainty about their lease, and other developers are watching what happens with Atlantic Shores closely. “We’re in uncertain territory here,” he added.

Offshore wind foes, who are particularly vocal and well-organized in New Jersey, celebrated Shell’s withdrawal. Republican Rep. Jeff Van Drew, of New Jersey, helped the Trump team draft the executive order. He said Shell’s decision is a “big win” for New Jersey’s coastline and economy but “this fight is not over.”

Robin Shaffer, president of Protect Our Coast NJ, said that without Shell's financial backing, it appears the project is “dead in the water.”

Reports indicate that Canadian neurosurgeons seek regulatory approval to recruit six patients with paralysis for voluntary implantation of the BCI device. A specialized 1.8-ton robot will implant 64 electrodes, each with 16 contacts, into the hand-motor areas of patients’ brains. These electrodes will transmit neural activity, allowing users to control connected devices through thought alone.

Meanwhile, more details on the Convoy study, aimed at further refining BCI-controlled robotics, are awaited. Neuralink has stated that additional updates will be shared in due course as research progresses.

It's unclear whether Shell's decision kills the project — partner EDF-RE Offshore Development says it remains committed to Atlantic Shores.

On his first day in office, Trump signed an executive order singling out offshore wind for contempt with a temporary halt on all lease sales in federal waters and a pause on approvals, permits and loans. Perhaps most of interest to Shell, the order directs administration officials to review existing offshore wind energy leases and identify any legal reasons to terminate them.

Large offshore wind farms have been making electricity for three decades in Europe, and more recently in Asia. They are considered by experts to be an essential part of addressing climate change because they can take the place of fossil fuel plants, if paired with battery storage. New Jersey has set a goal of generating 100% of its energy from clean sources by 2035.

In Win for Trump, Shell Quits N.J. Offshore Wind Farm

In the first serious fallout from President Donald Trump's early actions against offshore wind power, oil and gas giant Shell is walking away from a major project off the coast of New Jersey.

Shell told The Associated Press it is writing off the project, citing increased competition, delays and a changing market.

“Naturally we also take regulatory context into consideration,” spokesperson Natalie Gunnell said in an email.

Shell co-owns the large Atlantic Shores project, which has most of its permits and would generate enough power for 1 million homes if both of two phases were completed. That’s enough for one-third of New Jersey households.

With the use of this implant, individuals who are quadriplegic can use their minds to operate external equipment such as laptops and smartphones. The BCI implant eliminates the need for wires and any physical movement.

People with disabilities, such as those who have amyotrophic lateral sclerosis (ALS) or cervical spinal cord injuries that limit or eliminate their ability to use both hands, have been invited to join Musk’s startup’s patient registry.

In January 2024, 30-year-old Noland Arbaugh, who suffered an accident in 2016 that left him paralyzed from the shoulders down, became the first individual to get Neuralink’s brain implant. Despite some difficulties along the way, Neuralink was able to modify the implant’s algorithm to improve its sensitivity and get it working again.

"We have always upheld the rule of law, and acted decisively and firmly against individuals and companies that flout the rules."

In its third-quarter results published in November, Nvidia said that Singapore accounts for almost 22% of its revenue but added that: "most shipments associated with Singapore revenue were to locations other than Singapore and shipments to Singapore were insignificant."

MTI cited Nvidia's comments in its Saturday statement and said the chipmaker said there was no reason to believe that DeepSeek had obtained any export-controlled products via Singapore.

"Singapore is an international business hub. Major US and European companies have significant operations here. Nvidia has explained that many of these customers use their business entities in Singapore to purchase chips for products destined for the US and other Western countries," MTI added.

Other TVA employees are also earning top dollar — with chief financial officer John Thomas raking in $6.3 million a year, chief operating officer Don Moul earning $5 million, general counsel David Fountain making $3.3 million and chief nuclear officer Tim Rausch taking home $3.3 million, according to a November report from the Knoxville News Sentinel.

Criticism of TVA salaries has been bipartisan, with Rep. Steve Cohen (D-Tenn.) telling The Post in 2020 that Lyash’s pay was “out of line for a public agency.”

“Many [TVA executives] make over a million dollars running an agency set up to render energy and aid to a poor region in our country that still suffers economically in many areas,” Cohen said.

Neuralink brain implant user controls robotic arm, writes ‘Convoy’ in new video

Neuralink’s N1 chip eliminates the need for wires or any physical movement, enabling quadriplegic individuals to operate gadgets using their minds.

Elon Musk’s Neuralink suggests a human patient may have successfully used its brain chip to control a robotic arm. A video posted by the neurotechnology firm shows a robotic arm writing ‘Convoy’ on a whiteboard, referencing the company’s study on brain-controlled assistive robotics.

The demonstration highlights progress in Neuralink’s N1 chip, designed to restore mobility and communication for individuals with disabilities. While details remain limited, the clip hints at potential breakthroughs in brain-machine interface technology.

#neuralink #roboticarm #brain

That is awesome, I'm going to go down that rabbit hole now. Right after I finish the solar minimum thing.

Brain-powered robotics
The new 30-second clip reveals little, not even the operator’s identity. Neuralink’s X post shares the video along with heart, robot arm, and pen emojis, hinting at brain-controlled robotic advances. The demonstration is part of the CONVOY feasibility study announced in November, which draws its participants from the company's ongoing PRIME (Precise Robotically Implanted Brain-Computer Interface) study.

Some observers noted the significance of Neuralink’s demonstration, suggesting the patient was controlling the robotic arm using only their mind, without a joystick or muscle sensor. Musk acknowledged the interpretation as accurate, according to a report by PCMag.

A tiny, aesthetically undetectable brain-computer interface (BCI) implant is inserted into the area of the brain responsible for movement planning as part of Neuralink’s PRIME project.

McDonald’s said it will extend the deadline for this year’s scholarships from Feb. 6 to March 6 to accommodate any new applicants.

Blum applauded the settlement Friday.

“McDonald’s has wisely agreed to end this discriminatory scholarship program,” Blum said. “It is a shame that over many years thousands of students were shut out of this program because they were not the preferred ethnicity.”

McDonald's is one of many companies that have halted some diversity efforts in the wake of the 2023 U.S. Supreme Court ruling that banned race as a factor in college admissions.

Earlier in January, McDonald's said it would retire specific goals for achieving diversity at senior leadership levels. It also ended a program that encouraged its suppliers to develop diversity training and to increase the number of minority group members represented within their own leadership ranks.

Sam Altman: OpenAI has been on the 'wrong side of history' concerning open source

In a Reddit AMA, OpenAI CEO Sam Altman said that he believes OpenAI has been 'on the wrong side of history' concerning its open source approach.

To cap off a day of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday.

OpenAI finds itself in a bit of a precarious position. It’s battling the perception that it’s ceding ground in the AI race to Chinese companies like DeepSeek, which OpenAI alleges might’ve stolen its IP. The ChatGPT maker has been trying to shore up its relationship with Washington and simultaneously pursue an ambitious data center project, while reportedly laying groundwork for one of the largest financing rounds in history.

#openai #samaltman #opensource

Trump Tariff Promise Snuffs Out Bitcoin Rally for Second Consecutive Day

Looks like Trump’s tariff drama is shaking up Bitcoin again. For the second day in a row, BTC shot up past $106K, only to tumble after the White House doubled down on the 25% tariffs. #Bitcoin #Crypto

On Thursday, BTC was soaring until Trump promised tariffs on Mexico & Canada. Prices dropped 2%, stocks wobbled, but still ended green. Then came Friday’s Reuters report hinting at a delay…

That optimism didn’t last. The White House quickly shut it down, calling the report false. Trump’s press secretary confirmed the tariffs (including 10% on China) go live tomorrow. BTC tanked again.

Bitcoin, which almost hit $109K earlier, plunged below $103K, down 2.3% in 24 hours. Meanwhile, the CoinDesk 20 Index slipped 1.3%, with ETH managing a slight 1.2% gain.

Traditional stocks stayed positive but lost momentum. Bitcoin, though? It’s clearly not a fan of trade war threats. Will it recover, or are we in for more turbulence?

They say DeepSeek is cheap, then I read that it cost billions and that the plan is to break the US market. What is true?

From Venice.ai

Web 4.0 refers to the next generation of the World Wide Web, which is expected to be more intelligent, autonomous, and decentralized than its predecessors. While Web 3.0 focused on decentralization and blockchain technology, Web 4.0 aims to integrate artificial intelligence (AI), Internet of Things (IoT), and other emerging technologies to create a more immersive and interactive experience.

Here's a detailed explanation of Web 4.0 and its comparison to Web 3.0:

Web 3.0:
Web 3.0, also known as the Decentralized Web, emerged as a response to the centralized nature of Web 2.0. It emphasizes decentralization, blockchain technology, and token-based economies. The key features of Web 3.0 include:

Give it a couple of years, we're already on the brink of Web 4.0, and Web 3.0 did not even reach mainstream. !LOLZ !HOPE

I wish I could clean mirrors for a living.
It's just something I can see myself doing.

Credit: reddit
$LOLZ on behalf of ahmadmanga

(1/4)
Delegate Hive Tokens to Farm $LOLZ and earn 110% Rewards. Learn more.
@taskmaster4450le, I sent you an

Examples of Web 4.0 applications include AI-powered virtual assistants, autonomous vehicles, smart homes, and immersive gaming experiences.

Comparison between Web 3.0 and Web 4.0:

  1. Decentralization: Both Web 3.0 and Web 4.0 emphasize decentralization, but Web 4.0 takes it further by integrating AI and IoT to create more autonomous systems.
  2. Intelligence: Web 4.0 incorporates AI algorithms to enable machines to learn, reason, and interact with humans in a more natural way.
  3. Interactivity: Web 4.0 focuses on creating immersive experiences through XR and IoT, while Web 3.0 focuses on decentralized networks and token-based economies.
  4. Scalability: Web 4.0 is expected to be more scalable than Web 3.0 due to the use of quantum computing and autonomous systems.

Web 4.0:
Web 4.0 takes the principles of decentralization and blockchain technology from Web 3.0 and adds new layers of intelligence, autonomy, and interactivity. The key features of Web 4.0 include:

  1. Artificial intelligence (AI): Integration of AI algorithms that enable machines to learn, reason, and interact with humans in a more natural way.
  2. Internet of Things (IoT): Seamless integration with physical devices and sensors that collect and exchange data.
  3. Extended Reality (XR): Immersive experiences that combine virtual reality (VR), augmented reality (AR), and mixed reality (MR) to create new forms of interaction.
  4. Autonomous systems: Self-governing systems that can adapt to changing conditions without human intervention.
  5. Quantum computing: The use of quantum computers to solve complex problems that are currently unsolvable with traditional computers.
The key features of Web 3.0 include:
  1. Decentralized networks: Blockchain-based networks that allow for peer-to-peer transactions and data storage.
  2. Token-based economies: Cryptocurrencies and tokens that enable new forms of economic interaction.
  3. Smart contracts: Self-executing contracts with the terms of the agreement written directly into code (see the sketch after the examples below).

Examples of Web 3.0 applications include decentralized finance (DeFi) platforms, non-fungible token (NFT) marketplaces, and decentralized social media platforms.
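To make the "smart contracts" item above concrete, here is a minimal, blockchain-agnostic Python sketch of a self-executing escrow: the terms of the agreement live in code, and settlement triggers itself once both coded conditions are met. The parties, amounts, and method names are invented for illustration; production smart contracts run on-chain in languages such as Solidity rather than plain Python.

```python
from dataclasses import dataclass

# Illustration of the "self-executing contract" idea: no third party decides
# when to pay; the code releases funds as soon as its conditions are satisfied.

@dataclass
class Escrow:
    buyer: str
    seller: str
    amount: int                  # token amount locked in the contract
    payment_deposited: bool = False
    goods_delivered: bool = False
    settled: bool = False

    def deposit(self) -> None:
        self.payment_deposited = True
        self._try_settle()

    def confirm_delivery(self) -> None:
        self.goods_delivered = True
        self._try_settle()

    def _try_settle(self) -> None:
        # The contract "executes itself" once both conditions hold.
        if self.payment_deposited and self.goods_delivered and not self.settled:
            self.settled = True
            print(f"{self.amount} tokens released from {self.buyer} to {self.seller}")

contract = Escrow(buyer="alice", seller="bob", amount=100)
contract.deposit()            # nothing happens yet
contract.confirm_delivery()   # both conditions met, funds release automatically
```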

Chatbot Software Begins to Face Fundamental Limitations

Recent results show that large language models struggle with compositional tasks, suggesting a hard limit to their abilities.

On December 17, 1962, Life International published a logic puzzle consisting of 15 sentences describing five houses on a street. Each sentence was a clue, such as “The Englishman lives in the red house” or “Milk is drunk in the middle house.” Each house was a different color, with inhabitants of different nationalities, who owned different pets, and so on. The story’s headline asked: “Who Owns the Zebra?” Problems like this one have proved to be a measure of the abilities — limitations, actually — of today’s machine learning models.

#chatbot #software #ai

Also known as Einstein’s puzzle or riddle (likely an apocryphal attribution), the problem tests a certain kind of multistep reasoning. Nouha Dziri, a research scientist at the Allen Institute for AI, and her colleagues recently set transformer-based large language models (LLMs), such as ChatGPT, to work on such tasks — and largely found them wanting. “They might not be able to reason beyond what they have seen during the training data for hard tasks,” Dziri said. “Or at least they do an approximation, and that approximation can be wrong.”
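The puzzle described above is a classic constraint-satisfaction task, and the difficulty is compositional: each clue is trivial on its own, but the answer only emerges from combining all of them. As a deliberately scaled-down, made-up illustration (three houses and five invented clues, not the original fifteen), here is a brute-force solver in Python.

```python
from itertools import permutations

# Toy "Who owns the zebra?"-style puzzle with three houses (positions 0..2).
# The clues are invented for this example; the point is that the answer only
# falls out when every constraint is satisfied simultaneously.

nationalities = ["Englishman", "Spaniard", "Norwegian"]
colors = ["red", "green", "blue"]
pets = ["zebra", "dog", "fish"]

for nat in permutations(nationalities):
    for col in permutations(colors):
        for pet in permutations(pets):
            # Clue 1: the Englishman lives in the red house.
            if col[nat.index("Englishman")] != "red":
                continue
            # Clue 2: the Spaniard owns the dog.
            if pet[nat.index("Spaniard")] != "dog":
                continue
            # Clue 3: the Norwegian lives in the first house.
            if nat[0] != "Norwegian":
                continue
            # Clue 4: the green house is immediately right of the red house.
            if col.index("green") != col.index("red") + 1:
                continue
            # Clue 5: the fish is kept in the middle house.
            if pet[1] != "fish":
                continue
            owner = nat[pet.index("zebra")]
            print(f"The {owner} owns the zebra.")
```

A classical solver handles this tiny version instantly by exhaustive search; the article's point is that LLMs, which predict text rather than search over constraints, tend to approximate and can fail as the number of clues and entities grows.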

Singapore says U.S. firms should comply with export controls following DeepSeek chip questions

Questions have been raised over the provenance of the semiconductors used to build DeepSeek's AI model, given U.S. export restrictions.

Singapore's Ministry of Trade and Industry (MTI) said in a statement Saturday that it expects U.S. companies to comply with U.S. export controls and local laws, following questions over the chips used by China's DeepSeek to produce its AI model.

Markets were rocked this week after DeepSeek claimed its large language model outperforms OpenAI's but cost a fraction of the price to train. However, questions were soon raised over the provenance of the semiconductors used to build DeepSeek's R1 reasoning model, given U.S. restrictions on exporting advanced AI chips to China.

#singapore #deepseek #nvidia #chips #exports #controls

Bloomberg on Friday reported that U.S. officials were investigating whether DeepSeek had bought advanced semiconductors from chipmaker Nvidia via third parties in Singapore.

A Nvidia spokesperson told CNBC Monday that the chips used by DeepSeek were fully export-compliant. DeepSeek was not immediately available for comment when contacted by CNBC.

"We expect US companies, like Nvidia, to comply with US export controls and our domestic legislation. Our customs and law enforcement agencies will continue to work closely with their US counterparts," MTI said in its statement.