RE: LeoThread 2024-11-02 11:20

Here is the daily technology #threadcast for 11/2/24. The goal is to make this a technology "reddit".

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make this a technology center.

Could the Less Privileged Utilize Chatbots as Free Tutors?

Do you envision a future where the less privileged in society, in terms of financial backgrounds, would be able to utilize chatbots as free tutors? All they'd need is access to the Internet and a mobile phone.

#EdTech #Accessibility #askleo

I have no doubt that models like this will be common on the streets with us in the future.

#technology #robot #ai

Perhaps. It will be interesting to see how humans embrace robots. We might not require something that looks like us.

There will be millionaires out there wanting "a clone" in the form of a robot.

This would even be interesting for some country leaders to avoid suffering a terrorist attack.

Do you think the Second Amendment needs to be reinterpreted in light of technological advancements?

#askleo #technology

With today’s technology, is it possible to fake a successful moon landing, even though we haven’t yet safely landed humans on Mars?

#askleo

Faking a moon landing would demand an army’s worth of highly skilled people, tons of specialized equipment, and endless funding. By comparison, actually going to the Moon requires a smaller team, fewer resources, and a bit less money to pull off successfully.

It’s clear which option makes more sense.

Is it possible to recover lost coins from the blockchain?

Say the world would explode if we don't...

#askleo

I saw this somewhere in the deep ocean of the Internet 😂

Bro forgot to copy the answer.

#aijokes

BTC long term always wins. But you can't guarantee you'll live long enough to enjoy the cash out, so when is it ok to pull out?

I have a question: when exactly will we see ChatGPT making the next ChatGPT on its own, and what job should I transition to, assuming I'm a programmer watching AI about to take over?

Intelligent individuals who value their education will utilize AI as a learning aid rather than a means of cheating. They’ll engage with tools like ChatGPT to act as a tutor, helping them grasp concepts rather than simply providing answers.

#Education #LearningTools

Looking at our current educational system, the students, and artificial intelligence, I would argue that it's better to incorporate AI in a strategic way than to outright ban students from using it.

Encouraging students to use it in an ethical manner is better, because you can't stop the revolution; you can only "tweak" it to suit your organization.

What's your take...

#ai #education

The education system is going to get whacked at some point. It will be slow since it is heavily regulated.

They will have to change it when we no longer need the qualities of working in a factory. The focus should change to creating modern humans instead of obedient workers.

The skills required in the workforce today are much different than a number of decades ago. The education system did not change. Of course, AI is going to disrupt that completely. It will start with the upper level education i.e. colleges. Then it will move down.

The reason it hasn't changed yet is because the people in power have not yet really admitted that we are not manufacturing countries anymore. And now I'm talking about the West.

We have seen some changes, like introducing programming in elementary school :)

Yeah, slow but most definitely would happen....

I also see kids not needing to go to school (buildings) if only chatbots could have good memory and include a syllabus in the future...

A kid could just tell his OpenAI assistant, or any AI company's for that matter:

"I want to be a physicist"
Or

"I want to be the next Elon Musk, Train me"

More important is the idea of xAI being trained so it can replace Elon Musk.

Think about the impact of that.

That's thinking one step further.

As fast as things are going, it is important to think in that way. It is also why posting as much as possible is vital. The models are accumulating all that data. It will help to create utility for us in the future.

Indeed, I get your point... How fast it will get there would depend on how much data we input into the models...

I guess it's more important to feed decentralized AI models as we wouldn't want to give that power to Centralized organizations.

Does it make sense?


Nvidia replacing fallen icon in Dow stock index after 25-year run

Nvidia has emerged as a cornerstone of the global semiconductor industry, and shares have risen more than two-fold this year alone.

Intel will lose its spot in the Dow Jones Industrial Average to Nvidia after a 25-year run, S&P Dow Jones Indices said Friday, the latest blow to the struggling chipmaker, which was among the first two technology firms to be included in the blue-chip index.

#intel
#nvidia #dow #Amarkets

Once the dominant force in chipmaking, Intel has in recent years ceded its manufacturing edge to rival TSMC and missed out on the generative artificial intelligence boom after missteps including passing on an investment in ChatGPT-owner OpenAI.

Intel’s shares have declined 54% this year, making it the worst performer on the index and leaving it with the lowest stock price on the price-weighted Dow.

The stock fell about 1% to $22.79 in extended trading on Friday, while Nvidia was up more than 2% to $139.17.
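That "price-weighted" detail is why Intel's low share price matters so much: a component's weight in the Dow is its share price divided by the sum of all component prices, not its market cap. A minimal sketch in Python, using the two prices just quoted plus one hypothetical placeholder component (the real index has 30; the AAPL figure here is illustrative, not a quote):

```python
# Sketch of a price-weighted index: weight = price / sum of prices.
# Only INTC and NVDA prices come from the article; AAPL is a placeholder.
prices = {
    "INTC": 22.79,   # lowest price -> smallest pull on the index
    "NVDA": 139.17,
    "AAPL": 222.00,  # hypothetical third component for illustration
}

total = sum(prices.values())
weights = {ticker: price / total for ticker, price in prices.items()}

for ticker, w in sorted(weights.items(), key=lambda kv: kv[1]):
    print(f"{ticker}: {w:.1%}")
```

With these numbers Intel carries the smallest weight by far, which is why its place in the price-weighted average was increasingly hard to justify.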

This development comes a day after Intel expressed optimism about the future of its PC and server businesses, projecting current-quarter revenue above estimates but warning that it had “a lot of work to do.”

“Losing the status of Dow Jones inclusion would be another reputational blow for Intel, as it grapples with a painful transformation and loss of confidence,” said Susannah Streeter, head of money and markets at Hargreaves Lansdown.

“It would also mean that Intel is not included in exchange-traded funds (ETFs) which track the index, which could impact the share price further.”

Launched in 1968, the Silicon Valley pioneer sold memory chips before switching to processors that helped launch the personal computer industry.

‘Nudify’ Deepfake Bots on Telegram Are Up to 4 Million Monthly Users

Deepfake Telegram bots generating nude images of women, including minors, have amassed millions of users, raising serious concerns about privacy, consent, and the potential for widespread exploitation.

Back in 2020, deepfake expert Henry Ajder identified a bot on the messaging app Telegram that “undressed” photos and generated over 100,000 explicit images—some of which were of minors. This, of course, raised serious concerns about the dark side of AI.

Things haven’t gotten much better.

#technology #sex #deepfake #telegram #bots

This week, WIRED reviewed activity on Telegram and discovered at least 50 bots that apparently create similar explicit photos and videos.

Using AI technology similar to the DeepNude app, the bots generate fake nude images of women from regular photos. Users can simply upload any clothed photo of any woman and receive a fabricated nude image or fake sexual video in return, often within minutes.

According to WIRED, these bots have over 4 million monthly users combined. That’s likely just a portion of what’s actually out there, and that’s… incredibly concerning.

“We’re talking about a significant, orders-of-magnitude increase in the number of people who are clearly actively using and creating this kind of content,” said Ajder, in the new article.

“It is really concerning that these tools—which are really ruining lives and creating a very nightmarish scenario primarily for young girls and for women—are still so easy to access and to find on the surface web, on one of the biggest apps in the world.”

Back in 2020, one cybersecurity firm claimed to find images of 100,000 women. That's basically one woman per user. And now there are 4 million users.

“These types of fake images can harm a person’s health and well-being by causing psychological trauma and feelings of humiliation, fear, embarrassment, and shame,” added Emma Pickering, head of technology-facilitated abuse and economic empowerment at Refuge, the largest UK domestic abuse organization for women.

Telegram, which recently apologized to South Korea for its deepfake porn issue, doesn’t exactly have a reputation for addressing exploitation on its platform.

“Telegram provides you with the search functionality, so it allows you to identify communities, chats, and bots,” said Ajder. “It provides the bot-hosting functionality, so it’s somewhere that provides the tooling in effect. Then it’s also the place where you can share it and actually execute the harm in terms of the end result.”

Chipmaking giant Nvidia in talks with Elon Musk over investing in xAI: source

Chipmaking giant Nvidia is in talks with Elon Musk about investing in his fast-growing artificial intelligence startup xAI, a source close to the situation said.

#xai #nvidia #grok #semiconductor #technology #llm

xAI — which powers the snarky Grok chatbot on Musk’s X social network — is in talks with some investors about raising several billion dollars at a roughly $40 billion valuation, the WSJ reported this week.

The Information reported he was talking to strategic investors — meaning tech companies as opposed to investment firms — but didn’t offer any names.

Venture firms including Sequoia Capital, Andreessen Horowitz and Vy Capital have been included in the latest funding talks, the tech news site reported.

Nvidia — which under CEO Jensen Huang last week surpassed Apple to become the world’s most valuable company with a market capitalization of more than $3.5 trillion — declined to comment when contacted by The Post.

The company had strongly denied similar rumors in the spring.

Musk is expected to hold a major new fundraising round in January that could value xAI at as much as $75 billion, two sources said.

It’s not unusual for chipmakers like Nvidia to co-invest with their customers on projects, according to industry insiders.

One Nvidia analyst who asked not to be named said xAI’s competitors would still buy Nvidia’s chips even if it invests in xAI.

AI Sexbots Are On The Rise. Should We Regulate Them?

In an almost prophetic way, Spike Jonze’s 2013 film “Her” introduced audiences to a world where artificial intelligence blurs the lines between technology and human intimacy. Now, as AI sexbots are becoming a more prevalent part of our real-world lives, researchers are concerned over how they will impact the future and the ethics of human sexuality.

#sexuality #sexbots #technology #ai #regulation

The protagonist in “Her,” Theodore, develops an emotional bond with an AI operating system. The human and the machine begin to explore themes of love and loneliness and, more importantly, whether human connection can bridge the gap into the technological. This cinematic exploration of AI-human relationships has become quite relevant as the AI sexbot industry begins to take shape.

In a recent article published in The Conversation, the University of Sydney’s Raffaele Ciriello argues that the burgeoning AI sexbot industry is poised to revolutionize human intimacy, and that there are some serious risks associated with this new kind of ‘love.’

Virtual companions and physical robots that mimic human interactions are already out there. Companies like Replika have already capitalized on this trend by providing users with customizable digital partners. Claiming to have 30 million users, Replika allows individuals to create AI companions tailored to their preferences, engaging in intimate conversations and role-playing scenarios.

This rise in “digisexuality” reflects a growing demand for AI-driven relationships, and Ciriello is worried about the future of human romance and connection.

“The availability of AI-driven relationships is likely to usher in all manner of ethically dubious behaviour from users who won’t have to face the real-world consequences,” he writes.

One significant issue is the ability of users to manipulate their AI partners without facing real-world consequences. Don’t like your digital partner’s opinion on a sensitive issue or topic? No need to engage in a difficult albeit developmentally healthy conversation. Just switch it off.

From Wikipedia:

Sex robots or sexbots are anthropomorphic robotic sex dolls that have a humanoid form, human-like movement or behavior, and some degree of artificial intelligence. As of 2018, although elaborately instrumented sex dolls have been created by a number of inventors, no fully animated sex robots yet exist. Simple devices have been created which can speak, make facial expressions, or respond to touch.

There is controversy as to whether developing them would be morally justifiable. In 2015, robot ethicist Kathleen Richardson called for a ban on the creation of anthropomorphic sex robots, citing concerns about normalizing relationships with machines and reinforcing female dehumanization. Questions about their ethics, effects, and possible legal regulations have been discussed since then.

The Making of Sexbots (NSFW)

Matt McMullen is changing the world of sex toys with his hyperrealistic sex doll.

It too was a nightmare to install and turned out to be like an X-rated Tickle Me Elmo. Instead of “That tickles!” the doll said things like “Ow!” and “Oh, that feels good” or simply moaned. “We did that for a while and it was cool—some people loved it,” Matt recalls halfheartedly. Others didn’t think it was worth the $1,500. “But more people said, ‘Well, I don’t know if I want her to talk.’ I kind of like that it’s just a doll, and that’s kind of where sometimes I feel I am. You start adding all these other things, it’s not really just a doll anymore.”

#technology #sexbots #Robots #sex #society

The thought of getting back into robotics now is exciting but also intimidating and anxiety-inducing: “I feel like 10 years ago when I was doing this, I was completely content. I made dolls and I made them as beautiful as I could and it was a very free feeling. … I guess in a sense it makes you long for the simplicity of what used to be.”

It’s Alive!
At the end of The Stepford Wives, the evil, Dr. Frankenstein-like head of the Men’s Association—nicknamed “Diz” because he once worked in Disney’s animatronics department—responds to one of the last utterances of Katherine Ross’s doomed character, Joanna. “Why? Because we can,” Diz informs her. “We found a way of doing it that’s just perfect, perfect for us and perfect for you…. See, think of it the other way around: wouldn’t you like some perfect stud waiting on you around the house? Praising you? Servicing you? Whispering how your sagging flesh was beautiful, no matter how you looked?” Then the sexbot, an exact replica of Joanna except for its black, doll-like eyes and gravity-defying breasts, tightens a stocking and strangles her with it.

Matt calls it a very entertaining movie and concept. And creepy? “Yeah, that’s creepy. But our goal would never be to do that, and whatever amount of technology I incorporate into our dolls as we go forward into the future will be geared at the simple goal of enhancing that interaction, not taking away from it. I would never see that being a threat to an organic woman at all.” Besides, females might have some options by the time fembots are commonplace:

“They’re probably going to make robotic manbots, and don’t fool yourself: women will be in line, too,” he says. Like the Jude Law character “Gigolo Joe,” in A.I.? “Oh, sure. If you make a robot that is Johnny Depp-ish enough or whatever character at the time—of course they’ll be open to it!

“Across-the-board, human sexuality is expanding into these other avenues and frontiers,” he says. “We like to experience different types and flavors of sex, and that is our nature. And so I don’t think necessarily this is something that needs to be a high level of concern.

There’s this big gap between what people fantasize about and what’s possible even in the next decade. You know we’re not quite there. When we’re able to build a starship Enterprise, we’ll have these kinds of robots that people fantasize about, but there’s going to be a lot of steps between here and there.”

Is animating dolls or giving them emotional intelligence the greatest desire?

“Well, the idea, the goal, the fantasy there, is to bring her to life, ultimately,” he replies. But he admits that, given the choice between a beautiful woman and an animated doll, there are some who would still choose the latter. “They have a fetish for the doll. It has nothing to do with dehumanizing anyone. They have a fetish for this doll to be animated, and it has nothing to do with possessing them or controlling them. I mean, there are people out there who have sex with their car. There are people who have sexual fetishes about items of clothing or pieces of furniture—that’s out there and doesn’t dehumanize anyone. That’s just their thing, man. So again: relax.”

So women shouldn’t be worried about being replaced by synthetic versions of themselves?

“No. Nor should men be worried that they’ll be replaced by dildos.”

Don’t Feel Sorry for David Mills
Inside a booth at Red Lobster in downtown Huntington, David Mills is looking around for a waitress who used to be a stripper. One thing he will say for the Huntington area is there are some pretty good strip joints. People come from Charleston and all over. Every couple of months Mills goes to either Lady Godiva’s or Southern X-Posure, where the strippers are fully nude onstage and give wonderful private lap dances.

“The only problem I have is there are a lot of fat strippers and they have tattoos,” he says. “I mean, that just doesn’t do it for me, though usually in an evening they’ll have one or two that look really good and kind of classy-looking.”

He says he isn’t drinking tonight. Gets too carried away. Usually he will buy one 22-ounce bottle. “And that’s all I have. But if I have like a 12-pack, I drink until I throw up, so I rarely drink.”

Was he being serious about his offer to wash Taffy so I could test her out? “Yeah—I mean, that’s fine with me,” he replies. “That’s perfectly fine. There is absolutely no possibility of catching anything at all. You can do it now or later when you come back. I was not kidding.”

The only downside to Taffy is her weight, but “you can’t demand a life-size doll that looks and feels exactly like a woman and expect the doll to weigh 10 pounds and throw it over your shoulder.” Another issue is that dolls assume the ambient temperature. He is very interested to learn that McMullen is finalizing a design for a remote-control internal heating system so his customers won’t have to use an electric blanket.

A Miami AI company’s CEO will pay $64,000 to settle accusations of lying to investors

The SEC says there were “misrepresentations” and “misused funds” by the CEO.

The CEO of an AI robotics company that she ran out of a Miami apartment was better at hiding truths about the company’s progress, herself and where investor money got spent than guiding the company to produce the service robot it promised investors.

#ai #technology #sec #crime #robotics #fraud

The first prototype AI robot Destiny Robotics presented to the public was “a far cry from the socially intelligent ‘humanoid’ robot represented to investors,” the SEC stated in its complaint against Destiny Robotics and Megi Kavtaradze.

At least, that’s what a Securities and Exchange Commission complaint against Destiny Robotics and CEO Megi Kavtaradze claimed. Kavtaradze, legally, neither admits nor denies the accusations, and she declined to comment when reached Sunday by phone.

However, money talks. The case settlement approved Thursday by Miami federal court Judge K. Michael Moore says Kavtaradze agreed to pay the SEC a total of $64,384: $12,990 disgorgement, representing how much she profited from the “misrepresentations” in the SEC complaint; interest of $1,394; and a civil penalty of $50,000.
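As a quick sanity check, the settlement total is just the sum of the three components reported above:

```python
# Components of the SEC settlement, as reported in the article.
disgorgement = 12_990    # profit tied to the "misrepresentations"
interest = 1_394
civil_penalty = 50_000

total = disgorgement + interest + civil_penalty
print(total)  # 64384
```

The figures add up exactly to the $64,384 the court approved.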

Though also listed as a defendant in the civil action, Destiny Robotics is a defunct company, so it faced no penalty or disgorgement.

The SEC complaint said that in raising $141,000 from investors through crowdfunding, Kavtaradze and Destiny “made material misrepresentations” about:

▪ what Destiny Robotics products could do;

▪ when they would be released;

▪ the completion of the hologram prototype, while omitting that it had been abandoned;

▪ a major investor’s personal and business relationship with Kavtaradze, while using his endorsement and role as stockholder;

▪ Kavtaradze being “an experienced executive from a technology company.”

The complaint also said Kavtaradze misappropriated some investor money for personal use, including “meals, travel and application fees for MBA programs.”

Michael Hiltzik: These Apple researchers just showed that AI bots can’t think, and possibly never will

See if you can solve this arithmetic problem: Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?

#ai #technology #apple #thinking

If you answered "190," congratulations: You did as well as the average grade school kid by getting it right. (Friday's 44 plus Saturday's 58 plus Sunday's 44 multiplied by 2, or 88, equals 190.)

You also did better than more than 20 state-of-the-art artificial intelligence models tested by an AI research team at Apple. The AI bots, they found, consistently got it wrong.

The Apple team found "catastrophic performance drops" by those models when they tried to parse simple mathematical problems written in essay form. In this example, the systems tasked with the question often didn't understand that the size of the kiwis has nothing to do with the number of kiwis Oliver has. Some, consequently, subtracted the five undersized kiwis from the total and answered "185."
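The arithmetic itself is trivial; here is a minimal sketch of both the correct computation and the failure mode the researchers describe:

```python
# The kiwi puzzle from the Apple paper: the size detail is a distractor.
friday = 44
saturday = 58
sunday = 2 * friday  # "double the number he picked on Friday"

total = friday + saturday + sunday
print(total)  # 190 — the five smaller kiwis still count as kiwis

# The reported failure mode: models subtract the irrelevant detail.
wrong_answer = total - 5
print(wrong_answer)  # 185
```

The five smaller kiwis change nothing about the count, which is exactly the kind of inconsequential curveball the study probes.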

Human schoolchildren, the researchers posited, are much better at detecting the difference between relevant information and inconsequential curveballs.

The Apple findings were published earlier this month in a technical paper that has attracted widespread attention in AI labs and the lay press, not only because the results are well-documented, but also because the researchers work for the nation's leading high-tech consumer company - and one that has just rolled out a suite of purported AI features for iPhone users.

"The fact that Apple did this has gotten a lot of attention, but nobody should be surprised at the results," says Gary Marcus, a critic of how AI systems have been marketed as reliably, well, "intelligent."

Microsoft and a16z set aside differences, join hands in plea against AI regulation

Two of the biggest forces in two deeply intertwined tech ecosystems — large incumbents and startups — have taken a break from counting their money to jointly plead that the government cease and desist from even pondering regulations that might affect their financial interests, or as they like to call it, innovation.

#microsoft #a16z #ai #technology #regulation

“Our two companies might not agree on everything, but this is not about our differences,” writes this group of vastly disparate perspectives and interests: Founding a16z partners Marc Andreessen and Ben Horowitz, and Microsoft CEO Satya Nadella and President/Chief Legal Officer Brad Smith. A truly intersectional assemblage, representing both big business and big money.

But it’s the little guys they’re supposedly looking out for. That is, all the companies that would have been affected by the latest attempt at regulatory overreach: SB 1047.

Imagine being charged for improper open model disclosure! a16z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the Big Tech companies that could, unlike Midha and his impoverished colleagues, afford the lawyers necessary to comply.

California Senate Bill 1047

Enrolled September 03, 2024
Passed IN Senate August 29, 2024
Passed IN Assembly August 28, 2024
Amended IN Assembly August 22, 2024
Amended IN Assembly August 19, 2024
Amended IN Assembly July 03, 2024
Amended IN Assembly June 20, 2024
Amended IN Assembly June 05, 2024
Amended IN Senate May 16, 2024
Amended IN Senate April 30, 2024
Amended IN Senate April 16, 2024
Amended IN Senate April 08, 2024
Amended IN Senate March 20, 2024

Introduced by Senator Wiener
(Coauthors: Senators Roth, Rubio, and Stern)

February 07, 2024

An act to add Chapter 22.6 (commencing with Section 22602) to Division 8 of the Business and Professions Code, and to add Sections 11547.6 and 11547.6.1 to the Government Code, relating to artificial intelligence.

LEGISLATIVE COUNSEL'S DIGEST

SB 1047, Wiener. Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
Existing law requires the Secretary of Government Operations to develop a coordinated plan to, among other things, investigate the feasibility of, and obstacles to, developing standards and technologies for state departments to determine digital content provenance. For the purpose of informing that coordinated plan, existing law requires the secretary to evaluate, among other things, the impact of the proliferation of deepfakes, defined to mean audio or visual content that has been generated or manipulated by artificial intelligence that would falsely appear to be authentic or truthful and that features depictions of people appearing to say or do things they did not say or do without their consent, on state government, California-based businesses, and residents of the state.
This bill would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act to, among other things, require that a developer, before beginning to initially train a covered model, as defined, comply with various requirements, including implementing the capability to promptly enact a full shutdown, as defined, and implement a written and separate safety and security protocol, as specified.

The bill would require a developer to retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use plus 5 years, including records and dates of any updates or revisions and would require a developer to grant to the Attorney General access to the unredacted safety and security protocol.

The bill would prohibit a developer from using a covered model or covered model derivative for a purpose not exclusively related to the training or reasonable evaluation of the covered model or compliance with state or federal law or making a covered model or a covered model derivative available for commercial or public, or foreseeably public, use, if there is an unreasonable risk that the covered model or covered model derivative will cause or materially enable a critical harm, as defined.

The bill would require a developer, beginning January 1, 2026, to annually retain a third-party auditor to perform an independent audit of compliance with those provisions, as prescribed. The bill would require the auditor to produce an audit report, as prescribed, and would require a developer to retain an unredacted copy of the audit report for as long as the covered model is made available for commercial, public, or foreseeably public use plus 5 years. The bill would require a developer to grant to the Attorney General access to the unredacted auditor’s report upon request.

The bill would exempt from disclosure under the California Public Records Act the safety and security protocol and the auditor’s report described above.
This bill would require a developer of a covered model to submit to the Attorney General a statement of compliance with these provisions, as specified. The bill would also require a developer of a covered model to report each artificial intelligence safety incident affecting the covered model or any covered model derivative controlled by the developer to the Attorney General, as prescribed.

This bill would require a person that operates a computing cluster, as defined, to implement written policies and procedures to do certain things when a customer utilizes compute resources that would be sufficient to train a covered model, including assess whether a prospective customer intends to utilize the computing cluster to train a covered model.
This bill would authorize the Attorney General to bring a civil action, as provided.

The bill would also provide for whistleblower protections, including by prohibiting a developer of a covered model or a contractor or subcontractor of the developer from preventing an employee from disclosing information, or retaliating against an employee for disclosing information, to the Attorney General or Labor Commissioner if the employee has reasonable cause to believe the information indicates the developer is out of compliance with certain requirements or that the covered model poses an unreasonable risk of critical harm.

This bill would create the Board of Frontier Models within the Government Operations Agency, independent of the Department of Technology, and provide for the board’s membership. The bill would require the Government Operations Agency to, on or before January 1, 2027, and annually thereafter, issue regulations to, among other things, update the definition of a “covered model,” as provided, and would require the regulations to be approved by the board before taking effect.

This bill would establish in the Government Operations Agency a consortium required to develop a framework for the creation of a public cloud computing cluster to be known as “CalCompute” that advances the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by, among other things, fostering research and innovation that benefits the public, as prescribed. The bill would, on or before January 1, 2026, require the Government Operations Agency to submit a report from the consortium to the Legislature with that framework.

The bill would make those provisions operative only upon an appropriation in a budget act for its purposes.
Existing constitutional provisions require that a statute that limits the right of access to the meetings of public bodies or the writings of public officials and agencies be adopted with findings demonstrating the interest protected by the limitation and the need for protecting that interest.
This bill would make legislative findings to that effect.

Digest Key
vote: MAJORITY Appropriation: NO Fiscal Committee: YES Local Program: NO
Bill Text
The people of the State of California do enact as follows:

SECTION 1. This act shall be known, and may be cited, as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.
SEC. 2. The Legislature finds and declares all of the following:
(a) California is leading the world in artificial intelligence innovation and research, through companies large and small, as well as through our remarkable public and private universities.
(b) Artificial intelligence, including new advances in generative artificial intelligence, has the potential to catalyze innovation and the rapid development of a wide range of benefits for Californians and the California economy, including advances in medicine, wildfire forecasting and prevention, and climate science, and to push the bounds of human creativity and capacity.

(c) If not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities.

(d) The state government has an essential role to play in ensuring that California recognizes the benefits of this technology while avoiding the most severe risks, as well as to ensure that artificial intelligence innovation and access to compute is accessible to academic researchers and startups, in addition to large companies.
SEC. 3. Chapter 22.6 (commencing with Section 22602) is added to Division 8 of the Business and Professions Code, to read:
CHAPTER 22.6. Safe and Secure Innovation for Frontier Artificial Intelligence Models

22602. As used in this chapter:
(a) “Advanced persistent threat” means an adversary with sophisticated levels of expertise and significant resources that allow it, through the use of multiple different attack vectors, including, but not limited to, cyber, physical, and deception, to generate opportunities to achieve its objectives that are typically to establish and extend its presence within the information technology infrastructure of organizations for purposes of exfiltrating information or to undermine or impede critical aspects of a mission, program, or organization or place itself in a position to do so in the future.

(b) “Artificial intelligence” means an engineered or machine-based system that varies in its level of autonomy and that can, for explicit or implicit objectives, infer from the input it receives how to generate outputs that can influence physical or virtual environments.
(c) “Artificial intelligence safety incident” means an incident that demonstrably increases the risk of a critical harm occurring by means of any of the following:
(1) A covered model or covered model derivative autonomously engaging in behavior other than at the request of a user.
(2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model or covered model derivative.

(3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model or covered model derivative.
(4) Unauthorized use of a covered model or covered model derivative to cause or materially enable critical harm.
(d) “Computing cluster” means a set of machines transitively connected by data center networking of over 100 gigabits per second that has a theoretical maximum computing capacity of at least 10^20 integer or floating-point operations per second and can be used for training artificial intelligence.
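For illustration only, the two numeric criteria in this definition (data center networking over 100 gigabits per second, and a theoretical maximum of at least 10^20 operations per second) can be expressed as a simple predicate. The thresholds come from the bill text; the function name and inputs below are hypothetical, not part of the bill:

```python
# Hypothetical sketch of the numeric test in subdivision (d).
# Threshold constants come from the bill text; everything else is illustrative.

NETWORK_GBPS_THRESHOLD = 100   # data center networking, gigabits per second
PEAK_OPS_THRESHOLD = 10**20    # integer or floating-point operations per second

def meets_cluster_definition(network_gbps: float, peak_ops_per_sec: float) -> bool:
    """True if a set of machines meets both numeric criteria of the definition."""
    return (network_gbps > NETWORK_GBPS_THRESHOLD
            and peak_ops_per_sec >= PEAK_OPS_THRESHOLD)
```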
(e) (1) “Covered model” means either of the following:

(A) Before January 1, 2027, “covered model” means either of the following:
(i) An artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market prices of cloud compute at the start of training as reasonably assessed by the developer.

(ii) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.
(B) (i) Except as provided in clause (ii), on and after January 1, 2027, “covered model” means any of the following:

(I) An artificial intelligence model trained using a quantity of computing power determined by the Government Operations Agency pursuant to Section 11547.6 of the Government Code, the cost of which exceeds one hundred million dollars ($100,000,000) when calculated using the average market price of cloud compute at the start of training as reasonably assessed by the developer.
(II) An artificial intelligence model created by fine-tuning a covered model using a quantity of computing power that exceeds a threshold determined by the Government Operations Agency, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.

(ii) If the Government Operations Agency does not adopt a regulation governing subclauses (I) and (II) of clause (i) before January 1, 2027, the definition of “covered model” in subparagraph (A) shall be operative until the regulation is adopted.
(2) On and after January 1, 2026, the dollar amount in this subdivision shall be adjusted annually for inflation to the nearest one hundred dollars ($100) based on the change in the annual California Consumer Price Index for All Urban Consumers published by the Department of Industrial Relations for the most recent annual period ending on December 31 preceding the adjustment.
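Expressed as arithmetic, the two-pronged pre-2027 test in subparagraph (A) can be sketched as follows. The compute and dollar thresholds are taken from the bill text; the function name, parameters, and example inputs are hypothetical illustrations, not part of the statute:

```python
# Hypothetical sketch of the "covered model" test in subdivision (e)(1)(A).
# Thresholds come from the bill text; names and inputs are illustrative.

TRAIN_FLOP_THRESHOLD = 10**26        # integer or floating-point operations
TRAIN_COST_THRESHOLD = 100_000_000   # dollars, assessed at start of training
TUNE_FLOP_THRESHOLD = 3 * 10**25
TUNE_COST_THRESHOLD = 10_000_000     # dollars, assessed at start of fine-tuning

def is_covered_model(train_flops, train_cost_usd,
                     fine_tuned_from_covered=False,
                     tune_flops=0, tune_cost_usd=0):
    """Return True if a model meets either prong of the pre-2027 definition."""
    # Prong (i): initial training exceeds both the compute and cost thresholds.
    if train_flops > TRAIN_FLOP_THRESHOLD and train_cost_usd > TRAIN_COST_THRESHOLD:
        return True
    # Prong (ii): fine-tuning of a covered model at or above 3 x 10^25 operations,
    # at a cost exceeding ten million dollars.
    if (fine_tuned_from_covered
            and tune_flops >= TUNE_FLOP_THRESHOLD
            and tune_cost_usd > TUNE_COST_THRESHOLD):
        return True
    return False
```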

(f) “Covered model derivative” means any of the following:
(1) An unmodified copy of a covered model.
(2) A copy of a covered model that has been subjected to post-training modifications unrelated to fine-tuning.
(3) (A) (i) Before January 1, 2027, a copy of a covered model that has been fine-tuned using a quantity of computing power not exceeding three times 10^25 integer or floating-point operations, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.

(ii) On and after January 1, 2027, a copy of a covered model that has been fine-tuned using a quantity of computing power not exceeding a threshold determined by the Government Operations Agency, the cost of which, as reasonably assessed by the developer, exceeds ten million dollars ($10,000,000) if calculated using the average market price of cloud compute at the start of fine-tuning.
(B) If the Government Operations Agency does not adopt a regulation governing clause (ii) of subparagraph (A) by January 1, 2027, the quantity of computing power specified in clause (i) of subparagraph (A) shall continue to apply until the regulation is adopted.
(4) A copy of a covered model that has been combined with other software.

(g) (1) “Critical harm” means any of the following harms caused or materially enabled by a covered model or covered model derivative:
(A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.
(B) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure by a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure.
(C) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from an artificial intelligence model engaging in conduct that does both of the following:

(i) Acts with limited human oversight, intervention, or supervision.
(ii) Results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
(D) Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive.

(2) “Critical harm” does not include any of the following:
(A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.
(B) Harms caused or materially enabled by a covered model combined with other software, including other models, if the covered model did not materially contribute to the other software’s ability to cause or materially enable the harm.
(C) Harms that are not caused or materially enabled by the developer’s creation, storage, use, or release of a covered model or covered model derivative.

(3) On and after January 1, 2026, the dollar amounts in this subdivision shall be adjusted annually for inflation to the nearest one hundred dollars ($100) based on the change in the annual California Consumer Price Index for All Urban Consumers published by the Department of Industrial Relations for the most recent annual period ending on December 31 preceding the adjustment.
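As a worked example of the rounding rule in this paragraph (scale a dollar threshold by the change in the CPI, then round to the nearest one hundred dollars), the adjustment might be sketched as follows; the function name and CPI figures are hypothetical:

```python
# Hypothetical sketch of the annual inflation adjustment described above:
# scale a dollar amount by the change in the California Consumer Price Index
# for All Urban Consumers, then round to the nearest one hundred dollars.

def adjust_for_inflation(amount_usd: float, cpi_prior: float, cpi_current: float) -> int:
    adjusted = amount_usd * cpi_current / cpi_prior
    return round(adjusted / 100) * 100

# e.g. a $10,000,000 threshold with the index moving from 301.0 to 310.0:
# 10,000,000 * 310.0 / 301.0 ~= 10,299,003.32, rounded to $10,299,000
```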

(h) “Critical infrastructure” means assets, systems, and networks, whether physical or virtual, the incapacitation or destruction of which would have a debilitating effect on physical security, economic security, public health, or safety in the state.
(i) “Developer” means a person that performs the initial training of a covered model either by training a model using a sufficient quantity of computing power and cost, or by fine-tuning an existing covered model or covered model derivative using a quantity of computing power and cost greater than the amount specified in subdivision (e).

(j) “Fine-tuning” means adjusting the model weights of a trained covered model or covered model derivative by exposing it to additional data.
(k) “Full shutdown” means the cessation of operation of all of the following:
(1) The training of a covered model.
(2) A covered model controlled by a developer.
(3) All covered model derivatives controlled by a developer.
(l) “Model weight” means a numerical parameter in an artificial intelligence model that is adjusted through training and that helps determine how inputs are transformed into outputs.

(m) “Person” means an individual, proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, limited liability company, association, committee, or any other nongovernmental organization or group of persons acting in concert.
(n) “Post-training modification” means modifying the capabilities of a covered model or covered model derivative by any means, including, but not limited to, fine-tuning, providing the model with access to tools or data, removing safeguards against hazardous misuse or misbehavior of the model, or combining the model with, or integrating it into, other software.

(o) “Safety and security protocol” means documented technical and organizational protocols that meet both of the following criteria:
(1) The protocols are used to manage the risks of developing and operating covered models and covered model derivatives across their life cycle, including risks posed by causing or enabling or potentially causing or enabling the creation of covered model derivatives.
(2) The protocols specify that compliance with the protocols is required in order to train, operate, possess, and provide external access to the developer’s covered model and covered model derivatives.

22603. (a) Before beginning to initially train a covered model, the developer shall do all of the following:

(1) Implement reasonable administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications of, the covered model and all covered model derivatives controlled by the developer that are appropriate in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors.
(2) (A) Implement the capability to promptly enact a full shutdown.
(B) When enacting a full shutdown, the developer shall take into account, as appropriate, the risk that a shutdown of the covered model, or particular covered model derivatives, could cause disruptions to critical infrastructure.

(3) Implement a written and separate safety and security protocol that does all of the following:
(A) Specifies protections and procedures that, if successfully implemented, would successfully comply with the developer’s duty to take reasonable care to avoid producing a covered model or covered model derivative that poses an unreasonable risk of causing or materially enabling a critical harm.
(B) States compliance requirements in an objective manner and with sufficient detail and specificity to allow the developer or a third party to readily ascertain whether the requirements of the safety and security protocol have been followed.
(C) Identifies a testing procedure, which takes safeguards into account as appropriate, that takes reasonable care to evaluate if both of the following are true:

(i) A covered model poses an unreasonable risk of causing or enabling a critical harm.
(ii) Covered model derivatives pose an unreasonable risk of causing or enabling a critical harm.
(D) Describes in detail how the testing procedure assesses the risks associated with post-training modifications.
(E) Describes in detail how the testing procedure addresses the possibility that a covered model or covered model derivative can be used to make post-training modifications or create another covered model in a manner that may cause or materially enable a critical harm.
(F) Describes in detail how the developer will fulfill their obligations under this chapter.

(G) Describes in detail how the developer intends to implement the safeguards and requirements referenced in this section.
(H) Describes in detail the conditions under which a developer would enact a full shutdown.
(I) Describes in detail the procedure by which the safety and security protocol may be modified.
(4) Ensure that the safety and security protocol is implemented as written, including by designating senior personnel to be responsible for ensuring compliance by employees and contractors working on a covered model, or any covered model derivatives controlled by the developer, and monitoring and reporting on implementation.

(5) Retain an unredacted copy of the safety and security protocol for as long as the covered model is made available for commercial, public, or foreseeably public use plus five years, including records and dates of any updates or revisions.
(6) Conduct an annual review of the safety and security protocol to account for any changes to the capabilities of the covered model and industry best practices and, if necessary, make modifications to the policy.
(7) (A) (i) Conspicuously publish a copy of the redacted safety and security protocol and transmit a copy of the redacted safety and security protocol to the Attorney General.
(ii) A redaction in the safety and security protocol may be made only if the redaction is reasonably necessary to protect any of the following:
(I) Public safety.

(II) Trade secrets, as defined in Section 3426.1 of the Civil Code.
(III) Confidential information pursuant to state and federal law.
(B) The developer shall grant to the Attorney General access to the unredacted safety and security protocol upon request.
(C) A safety and security protocol disclosed to the Attorney General pursuant to this paragraph is exempt from the California Public Records Act (Division 10 (commencing with Section 7920.000) of Title 1 of the Government Code).
(D) If the safety and security protocol is materially modified, the developer shall conspicuously publish and transmit to the Attorney General an updated redacted copy within 30 days of the modification.

(8) Take reasonable care to implement other appropriate measures to prevent covered models and covered model derivatives from posing unreasonable risks of causing or materially enabling critical harms.
(b) Before using a covered model or covered model derivative for a purpose not exclusively related to the training or reasonable evaluation of the covered model or compliance with state or federal law or before making a covered model or covered model derivative available for commercial or public, or foreseeably public, use, the developer of a covered model shall do all of the following:
(1) Assess whether the covered model is reasonably capable of causing or materially enabling a critical harm.

(2) Record, as and when reasonably possible, and retain for as long as the covered model is made available for commercial, public, or foreseeably public use plus five years information on the specific tests and test results used in the assessment pursuant to paragraph (1) that provides sufficient detail for third parties to replicate the testing procedure.
(3) Take reasonable care to implement appropriate safeguards to prevent the covered model and covered model derivatives from causing or materially enabling a critical harm.
(4) Take reasonable care to ensure, to the extent reasonably possible, that the covered model’s actions and the actions of covered model derivatives, as well as critical harms resulting from their actions, can be accurately and reliably attributed to them.

(c) A developer shall not use a covered model or covered model derivative for a purpose not exclusively related to the training or reasonable evaluation of the covered model or compliance with state or federal law or make a covered model or a covered model derivative available for commercial or public, or foreseeably public, use, if there is an unreasonable risk that the covered model or covered model derivative will cause or materially enable a critical harm.
(d) A developer of a covered model shall annually reevaluate the procedures, policies, protections, capabilities, and safeguards implemented pursuant to this section.
(e) (1) Beginning January 1, 2026, a developer of a covered model shall annually retain a third-party auditor that conducts audits consistent with best practices for auditors to perform an independent audit of compliance with the requirements of this section.

(2) An auditor shall conduct audits consistent with regulations issued by the Government Operations Agency pursuant to subdivision (d) of Section 11547.6 of the Government Code.
(3) The auditor shall be granted access to unredacted materials as necessary to comply with the auditor’s obligations under this subdivision.
(4) The auditor shall produce an audit report including all of the following:
(A) A detailed assessment of the developer’s steps to comply with the requirements of this section.
(B) If applicable, any identified instances of noncompliance with the requirements of this section, and any recommendations for how the developer can improve its policies and processes for ensuring compliance with the requirements of this section.
(C) A detailed assessment of the developer’s internal controls, including its designation and empowerment of senior personnel responsible for ensuring compliance by the developer, its employees, and its contractors.

(D) The signature of the lead auditor certifying the results of the audit.
(5) The developer shall retain an unredacted copy of the audit report for as long as the covered model is made available for commercial, public, or foreseeably public use plus five years.
(6) (A) (i) The developer shall conspicuously publish a redacted copy of the auditor’s report and transmit to the Attorney General a copy of the redacted auditor’s report.
(ii) A redaction in the auditor’s report may be made only if the redaction is reasonably necessary to protect any of the following:
(I) Public safety.
(II) Trade secrets, as defined in Section 3426.1 of the Civil Code.
(III) Confidential information pursuant to state and federal law.

(B) The developer shall grant to the Attorney General access to the unredacted auditor’s report upon request.
(C) An auditor’s report disclosed to the Attorney General pursuant to this paragraph is exempt from the California Public Records Act (Division 10 (commencing with Section 7920.000) of title 1 of the Government Code).
(7) An auditor shall not knowingly make a material misrepresentation in the auditor’s report.
(f) (1) (A) A developer of a covered model shall annually submit to the Attorney General a statement of compliance with the requirements of this section signed by the chief technology officer, or a more senior corporate officer, that meets the requirements of paragraph (2).

(B) This paragraph applies if the covered model or any covered model derivatives controlled by the developer remain in commercial or public use or remain available for commercial or public use.
(2) In a statement submitted pursuant to paragraph (1), a developer shall specify or provide, at a minimum, all of the following:
(A) An assessment of the nature and magnitude of critical harms that the covered model or covered model derivatives may reasonably cause or materially enable and the outcome of the assessment required by paragraph (1) of subdivision (b).
(B) An assessment of the risk that compliance with the safety and security protocol may be insufficient to prevent the covered model or covered model derivatives from causing or materially enabling critical harms.

(C) A description of the process used by the signing officer to verify compliance with the requirements of this section, including a description of the materials reviewed by the signing officer, a description of testing or other evaluation performed to support the statement and the contact information of any third parties relied upon to validate compliance.
(g) A developer of a covered model shall report each artificial intelligence safety incident affecting the covered model, or any covered model derivatives controlled by the developer, to the Attorney General within 72 hours of the developer learning of the artificial intelligence safety incident or within 72 hours of the developer learning facts sufficient to establish a reasonable belief that an artificial intelligence safety incident has occurred.

(h) (1) A developer shall submit to the Attorney General a statement described by subdivision (f) no more than 30 days after using a covered model or covered model derivative for a purpose not exclusively related to the training or reasonable evaluation of the covered model or compliance with state or federal law or making a covered model or covered model derivative available for commercial or public, or foreseeably public, use for the first time.
(2) This subdivision does not apply with respect to a covered model derivative if the developer submitted a statement described by subdivision (f) for the applicable covered model from which the covered model derivative is derived.

(i) In fulfilling its obligations under this chapter, a developer shall consider industry best practices and applicable guidance from the U.S. Artificial Intelligence Safety Institute, National Institute of Standards and Technology, the Government Operations Agency, and other reputable standard-setting organizations.
(j) (1) This section shall not apply to products or services to the extent that the requirements would strictly conflict with the terms of a contract between a federal government entity and a developer of a covered model.

(2) This section applies to the development, use, or commercial or public release of a covered model or covered model derivative for any use that is not the subject of a contract with a federal government entity, even if that covered model or covered model derivative has already been developed, trained, or used by a federal government entity.

22604. (a) A person that operates a computing cluster shall implement written policies and procedures to do all of the following when a customer utilizes compute resources that would be sufficient to train a covered model:
(1) Obtain the prospective customer’s basic identifying information and business purpose for utilizing the computing cluster, including all of the following:

(A) The identity of the prospective customer.
(B) The means and source of payment, including any associated financial institution, credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier.
(C) The email address and telephonic contact information used to verify the prospective customer’s identity.

(2) Assess whether the prospective customer intends to utilize the computing cluster to train a covered model.
(3) If a customer repeatedly utilizes compute resources that would be sufficient to train a covered model, validate the information initially collected pursuant to paragraph (1) and conduct the assessment required pursuant to paragraph (2) prior to each utilization.
(4) Retain a customer’s Internet Protocol addresses used for access or administration and the date and time of each access or administrative action.
(5) Maintain for seven years and provide to the Attorney General, upon request, appropriate records of actions taken under this section, including policies and procedures put into effect.

(6) Implement the capability to promptly enact a full shutdown of any resources being used to train or operate models under the customer’s control.
(b) A person that operates a computing cluster shall consider industry best practices and applicable guidance from the U.S. Artificial Intelligence Safety Institute, National Institute of Standards and Technology, and other reputable standard-setting organizations.
(c) In complying with the requirements of this section, a person that operates a computing cluster may impose reasonable requirements on customers to prevent the collection or retention of personal information that the person that operates a computing cluster would not otherwise collect or retain, including a requirement that a corporate customer submit corporate contact information rather than information that would identify a specific individual.

22606. (a) The Attorney General may bring a civil action for a violation of this chapter and to recover all of the following:
(1) For a violation that causes death or bodily harm to another human, harm to property, theft or misappropriation of property, or that constitutes an imminent risk or threat to public safety that occurs on or after January 1, 2026, a civil penalty in an amount not exceeding 10 percent of the cost of the quantity of computing power used to train the covered model, to be calculated using average market prices of cloud compute at the time of training, for a first violation and in an amount not exceeding 30 percent of that value for any subsequent violation.

(2) For a violation of Section 22607 that would constitute a violation of the Labor Code, a civil penalty specified in subdivision (f) of Section 1102.5 of the Labor Code.
(3) For a person that operates a computing cluster for a violation of Section 22604, for an auditor for a violation of paragraph (6) of subdivision (e) of Section 22603, or for an auditor who intentionally or with reckless disregard violates a provision of subdivision (e) of Section 22603 other than paragraph (6) or regulations issued by the Government Operations Agency pursuant to Section 11547.6 of the Government Code, a civil penalty in an amount not exceeding fifty thousand dollars ($50,000) for a first violation of Section 22604, not exceeding one hundred thousand dollars ($100,000) for any subsequent violation, and not exceeding ten million dollars ($10,000,000) in the aggregate for related violations.

(4) Injunctive or declaratory relief.
(5) (A) Monetary damages.
(B) Punitive damages pursuant to subdivision (a) of Section 3294 of the Civil Code.
(6) Attorney’s fees and costs.
(7) Any other relief that the court deems appropriate.
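As arithmetic, the penalty ceiling in paragraph (1) of subdivision (a) scales with the cost of the training compute: up to 10 percent of that cost for a first violation and up to 30 percent for any subsequent violation. A hypothetical sketch, using integer percentages to avoid rounding error; the function name and inputs are illustrative:

```python
# Hypothetical sketch of the civil penalty ceiling in paragraph (1) of
# subdivision (a): up to 10% of the training-compute cost (at average cloud
# market prices at the time of training) for a first violation, and up to
# 30% for any subsequent violation. Percentages come from the bill text.

def max_penalty_usd(training_compute_cost_usd: int, prior_violation: bool) -> int:
    """Return the maximum civil penalty for a given training-compute cost."""
    rate_percent = 30 if prior_violation else 10
    return training_compute_cost_usd * rate_percent // 100
```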
(b) In determining whether the developer exercised reasonable care as required in Section 22603, all of the following considerations are relevant but not conclusive:
(1) The quality of a developer’s safety and security protocol.
(2) The extent to which the developer faithfully implemented and followed its safety and security protocol.

(3) Whether, in quality and implementation, the developer’s safety and security protocol was inferior, comparable, or superior to those of developers of comparably powerful models.
(4) The quality and rigor of the developer’s investigation, documentation, evaluation, and management of risks of critical harm posed by its model.
(c) (1) A provision within a contract or agreement that seeks to waive, preclude, or burden the enforcement of a liability arising from a violation of this chapter, or to shift that liability to any person or entity in exchange for their use or access of, or right to use or access, a developer’s products or services, including by means of a contract of adhesion, is void as a matter of public policy.

(2) A court shall disregard corporate formalities and impose joint and several liability on affiliated entities for purposes of effectuating the intent of this section to the maximum extent allowed by law if the court concludes that both of the following are true:
(A) The affiliated entities, in the development of the corporate structure among the affiliated entities, took steps to purposely and unreasonably limit or avoid liability.
(B) As the result of the steps described in subparagraph (A), the corporate structure of the developer or affiliated entities would frustrate recovery of penalties, damages, or injunctive relief under this section.
(d) Penalties collected pursuant to this section by the Attorney General shall be deposited into the Public Rights Law Enforcement Special Fund established pursuant to Section 12530 of the Government Code.

(e) This section does not limit the application of other laws.

22607. (a) A developer of a covered model or a contractor or subcontractor of the developer shall not do any of the following:
(1) Prevent an employee from disclosing information to the Attorney General or the Labor Commissioner, including through terms and conditions of employment or seeking to enforce terms and conditions of employment, if the employee has reasonable cause to believe the information indicates either of the following:
(A) The developer is out of compliance with the requirements of Section 22603.
(B) An artificial intelligence model, including a model that is not a covered model or a covered model derivative, poses an unreasonable risk of causing or materially enabling critical harm, even if the employer is not out of compliance with any law.

(2) Retaliate against an employee for disclosing information to the Attorney General or the Labor Commissioner pursuant to paragraph (1).
(3) Make false or materially misleading statements related to its safety and security protocol in a manner that violates Part 2 (commencing with Section 16600) of Division 7 or any other provision of state law.
(b) An employee harmed by a violation of this section may petition a court for appropriate temporary or preliminary injunctive relief as provided in Sections 1102.61 and 1102.62 of the Labor Code.
(c) (1) The Attorney General or Labor Commissioner may publicly release or provide to the Governor any complaint, or a summary of that complaint, pursuant to this section if the Attorney General or the Labor Commissioner concludes that doing so will serve the public interest.

(2) If the Attorney General or the Labor Commissioner publicly releases a complaint, or a summary of a complaint, pursuant to paragraph (1), the Attorney General or the Labor Commissioner shall redact from the complaint any information that is confidential or otherwise exempt from public disclosure pursuant to the California Public Records Act (Division 10 (commencing with Section 7920.000) of Title 1 of the Government Code) and any information that the Attorney General or the Labor Commissioner determines would likely pose an unreasonable risk to public safety if it were disclosed to the public.

(d) A developer shall provide a clear notice to all employees working on covered models and covered model derivatives of their rights and responsibilities under this section, including the right of employees of contractors and subcontractors to use the developer’s internal process for making protected disclosures pursuant to subdivision (e). A developer is presumed to be in compliance with the requirements of this subdivision if the developer does either of the following:
(1) At all times post and display within all workplaces maintained by the developer a notice to all employees of their rights and responsibilities under this section, ensure that all new employees receive equivalent notice, and ensure that employees who work remotely periodically receive an equivalent notice.

(2) No less frequently than once every year, provides written notice to all employees of their rights and responsibilities under this chapter and ensures that the notice is received and acknowledged by all of those employees.
(e) (1) (A) A developer shall provide a reasonable internal process through which an employee may anonymously disclose information to the developer if the employee believes in good faith that the information indicates that the developer has violated any provision of Section 22603 or any other law, or has made false or materially misleading statements related to its safety and security protocol, or failed to disclose known risks to employees, including, at a minimum, a monthly update to the person who made the disclosure regarding the status of the developer’s investigation of the disclosure and the actions taken by the developer in response to the disclosure.

(B) The process required by this paragraph shall apply to employees of the developer’s contractors and subcontractors working on covered models and covered model derivatives and allow those employees to disclose the same information to the developer that an employee of the developer may disclose and provide the same anonymity and protections against retaliation to the employees of the contractor or subcontractor that apply to disclosures by employees of the developer.

(2) The disclosures and responses of the process required by this subdivision shall be maintained for a minimum of seven years from the date when the disclosure or response is created. Each disclosure and response shall be shared with officers and directors of the developer whose acts or omissions are not implicated by the disclosure or response no less frequently than once per quarter. In the case of a report or disclosure regarding alleged misconduct by a contractor or subcontractor, the developer shall notify the officers and directors of the contractor or subcontractor whose acts or omissions are not implicated by the disclosure or response about the status of their investigation no less frequently than once per quarter.

(f) This section does not limit protections provided to employees by Section 1102.5 of the Labor Code, Section 12964.5 of the Government Code, or other law.
(g) As used in this section:
(1) “Employee” has the same meaning as defined in Section 1132.4 of the Labor Code and includes both of the following:
(A) Contractors or subcontractors and unpaid advisors involved with assessing, managing, or addressing the risk of critical harm from covered models and covered model derivatives.
(B) Corporate officers.
(2) “Contractor or subcontractor” has the same meaning as in Section 1777.1 of the Labor Code.

  1. The duties and obligations imposed by this chapter are cumulative with any other duties or obligations imposed under other law and shall not be construed to relieve any party from any duties or obligations imposed under other law and do not limit any rights or remedies under existing law.
  2. This chapter does not apply to the extent that it is preempted by federal law.
    SEC. 4. Section 11547.6 is added to the Government Code, to read:
    11547.6. (a) As used in this section, “critical harm” has the same meaning as defined in Section 22602 of the Business and Professions Code.
(b) There is hereby established the Board of Frontier Models. The board shall be housed in the Government Operations Agency and shall be independent of the Department of Technology. The Governor may appoint an executive officer of the board, subject to Senate confirmation, who shall hold the office at the pleasure of the Governor. The executive officer shall be the administrative head of the board and shall exercise all duties and functions necessary to ensure that the responsibilities of the board are successfully discharged.
(c) (1) Commencing January 1, 2026, the Board of Frontier Models shall be composed of nine members, as follows:
(A) A member of the open-source community appointed by the Governor and subject to Senate confirmation.
(B) A member of the artificial intelligence industry appointed by the Governor and subject to Senate confirmation.

(C) An expert in chemical, biological, radiological, or nuclear weapons appointed by the Governor and subject to Senate confirmation.
(D) An expert in artificial intelligence safety appointed by the Governor and subject to Senate confirmation.
(E) An expert in cybersecurity of critical infrastructure appointed by the Governor and subject to Senate confirmation.
(F) Two members who are academics with expertise in artificial intelligence appointed by the Speaker of the Assembly.
(G) Two members appointed by the Senate Rules Committee.
(2) A member of the Board of Frontier Models shall meet all of the following criteria:

(A) A member shall be free of direct and indirect external influence and shall not seek or take instructions from another.
(B) A member shall not take an action or engage in an occupation, whether gainful or not, that is incompatible with the member’s duties.
(C) A member shall not, either at the time of the member’s appointment or during the member’s term, have a financial interest in an entity that is subject to regulation by the board.
(3) A member of the board shall serve at the pleasure of the member’s appointing authority but shall serve for no longer than eight consecutive years.
(d) (1) On or before January 1, 2027, and annually thereafter, the Government Operations Agency shall issue regulations to update both of the following thresholds in the definition of a “covered model” to ensure that it accurately reflects technological developments, scientific literature, and widely accepted national and international standards and applies to artificial intelligence models that pose significant risk of causing or materially enabling critical harms.
(2) The updated definition shall contain both of the following:
(A) The initial compute threshold that an artificial intelligence model shall exceed to be considered a covered model.
(B) The fine-tuning compute threshold that an artificial intelligence model shall meet to be considered a covered model.

(3) In developing regulations pursuant to this subdivision, the Government Operations Agency shall take into account all of the following:
(A) The quantity of computing power used to train covered models that have been identified as being reasonably likely to cause or materially enable a critical harm.
(B) Similar thresholds used in federal law, guidance, or regulations for the management of artificial intelligence models with reasonable risks of causing or enabling critical harms.
(C) Input from stakeholders, including academics, industry, the open-source community, and government entities.

(e) (1) On or before January 1, 2027, and annually thereafter, the Government Operations Agency shall issue regulations to establish binding auditing requirements applicable to audits conducted pursuant to subdivision (e) of Section 22603 of the Business and Professions Code to ensure the integrity, independence, efficiency, and effectiveness of the auditing process. In developing regulations pursuant to this subdivision, the Government Operations Agency shall take into account both of the following:
(A) Relevant standards or requirements imposed under federal or state law or through self-regulatory or standards-setting bodies.
(B) Input from stakeholders, including academics, industry, and government entities, including from the open-source community.
(2) Any regulations issued pursuant to paragraph (1) shall, at a minimum, be consistent with guidance issued by the U.S. Artificial Intelligence Safety Institute and the National Institute of Standards and Technology.

(f) (1) On or before January 1, 2027, and annually thereafter, the Government Operations Agency shall issue guidance for preventing unreasonable risks of covered models and covered model derivatives causing or materially enabling critical harms, including, but not limited to, more specific components of, or requirements under, the duties required under Section 22603 of the Business and Professions Code.
(2) Any guidance issued pursuant to paragraph (1) shall, at a minimum, be consistent with guidance issued by the U.S. Artificial Intelligence Safety Institute and the National Institute of Standards and Technology.

(g) Regulations and guidance adopted pursuant to this section shall be approved by the Board of Frontier Models before taking effect.
SEC. 5. Section 11547.6.1 is added to the Government Code, to read:
11547.6.1. (a) There is hereby established in the Government Operations Agency a consortium that shall develop, pursuant to this section, a framework for the creation of a public cloud computing cluster to be known as “CalCompute.”
(b) The consortium shall develop a framework for creation of CalCompute that advances the development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable by doing, at a minimum, both of the following:
(1) Fostering research and innovation that benefits the public.

(2) Enabling equitable innovation by expanding access to computational resources.
(c) The consortium shall make reasonable efforts to ensure that CalCompute is established within the University of California to the extent possible.
(d) CalCompute shall include, but not be limited to, all of the following:
(1) A fully owned and hosted cloud platform.
(2) Necessary human expertise to operate and maintain the platform.
(3) Necessary human expertise to support, train, and facilitate use of CalCompute.
(e) The consortium shall operate in accordance with all relevant labor and workforce laws and standards.

(f) (1) On or before January 1, 2026, the Government Operations Agency shall submit, pursuant to Section 9795, a report from the consortium to the Legislature with the framework developed pursuant to subdivision (b) for creation and operation of CalCompute.
(2) The report required by this subdivision shall include all of the following elements:
(A) A landscape analysis of California’s current public, private, and nonprofit cloud computing platform infrastructure.
(B) An analysis of the cost to the state to build and maintain CalCompute and recommendations on potential funding sources.

(C) Recommendations for the governance structure and ongoing operation of CalCompute.
(D) Recommendations on the parameters for use of CalCompute, including, but not limited to, a process for determining which users and projects will be supported by CalCompute.
(E) An analysis of the state’s technology workforce and recommendations for equitable pathways to strengthen the workforce, including the role of CalCompute.
(F) A detailed description of any proposed partnerships, contracts, or licensing agreements with nongovernmental entities, including, but not limited to, technology-based companies, that demonstrates compliance with the requirements of subdivisions (c) and (d).

(G) Recommendations regarding how the creation and ongoing management of CalCompute can prioritize the use of the current public sector workforce.
(g) (1) The consortium shall, consistent with state constitutional law, consist of 14 members selected from among all of the following:
(A) Representatives of the University of California and other public and private academic research institutions and national laboratories.
(B) Representatives of impacted workforce labor organizations.
(C) Representatives of stakeholder groups with relevant expertise and experience, including, but not limited to, ethicists, consumer rights advocates, and other public interest advocates.

(D) Experts in technology and artificial intelligence to provide technical assistance.
(E) Personnel from other relevant departments and agencies as necessary.
(2) Eight members of the consortium shall be selected by the Secretary of Government Operations, and the President Pro Tempore of the Senate and the Speaker of the Assembly shall each select three members.
(h) If CalCompute is established within the University of California pursuant to subdivision (c), the University of California may receive private donations for the purposes of implementing CalCompute.
(i) This section shall become operative only upon an appropriation in a budget act for the purposes of this section.

SEC. 6. The provisions of this act are severable. If any provision of this act or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.
SEC. 7. This act shall be liberally construed to effectuate its purposes.
SEC. 8. The Legislature finds and declares that Section 3 of this act, which adds Chapter 22.6 (commencing with Section 22602) to Division 8 of the Business and Professions Code, imposes a limitation on the public’s right of access to the meetings of public bodies or the writings of public officials and agencies within the meaning of Section 3 of Article I of the California Constitution. Pursuant to that constitutional provision, the Legislature makes the following findings to demonstrate the interest protected by this limitation and the need for protecting that interest:
Information in unredacted safety and security protocols and auditor’s reports may contain corporate proprietary information or information about covered models and covered model derivatives that could threaten public safety if disclosed to the public.

Biden raised inflation through the roof in America; how is Trump going to bring it down to a level that makes America look better?

This is not technology related.

apologies 🙏

SpaceX wants to test refueling Starships in space early next year

SpaceX will attempt to transfer propellant from one orbiting Starship to another as early as next March, a technical milestone that will pave the way for an uncrewed landing demonstration of a Starship on the moon, a NASA official said this week.

#space #starships #technology #nasa

Kent Chojnacki, deputy manager of NASA’s Human Landing System (HLS) program, provided more detail on exactly how the agency is working with the space company as it looks toward that critical mission in an interview with Spaceflight Now. It will come as no surprise that NASA is paying close attention to Starship’s test campaign, which has notched five launches so far.

SpaceX made history during the most recent test on October 13 when it caught the Super Heavy rocket booster mid-air using “chopsticks” attached to the launch tower for the first time.

“We learn a lot each time [a launch] happens,” Chojnacki said.

Chojnacki’s work history includes numerous roles in the Space Launch System (SLS) program, which oversees the development of a massive rocket of the same name that is being built by a handful of traditional aerospace primes. The first SLS rocket launched the Artemis I mission in November 2022, and future rockets will launch the subsequent missions under the Artemis program. No part of the rocket is reusable, however, so NASA is spending upwards of $2 billion on each launch vehicle.

I realized that when AI is used correctly as a learning tool rather than a crutch, it stands out as one of the best tutors available (arguably)...

#ai #tutoring

Tesla shares outstanding history from 2010 to 2024. Shares outstanding can be defined as the number of shares held by shareholders (including insiders) assuming conversion of all convertible debt, securities, warrants and options. This metric excludes the company's treasury shares.
Tesla shares outstanding for the quarter ending June 30, 2024 were 3.481B, a 0.09% increase year-over-year.
Tesla 2023 shares outstanding were 3.485B, a 0.29% increase from 2022.
Tesla 2022 shares outstanding were 3.475B, a 2.63% increase from 2021.
Tesla 2021 shares outstanding were 3.386B, a 4.22% increase from 2020.
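The year-over-year percentages quoted above follow directly from the share counts. As a quick sanity check, here is a minimal Python sketch using only the figures listed (in billions of shares; the function name is mine, not from the source):

```python
# Tesla shares outstanding by fiscal year, in billions (from the figures above).
shares = {2021: 3.386, 2022: 3.475, 2023: 3.485}

def yoy_pct(curr, prev):
    """Percent change from the prior year's count to the current year's."""
    return (curr - prev) / prev * 100

print(round(yoy_pct(shares[2022], shares[2021]), 2))  # 2.63
print(round(yoy_pct(shares[2023], shares[2022]), 2))  # 0.29
```

Both results match the 2.63% (2022) and 0.29% (2023) increases stated above.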

Mark Zuckerberg says a lot more AI generated content is coming to fill up your Facebook and Instagram feeds

First we had friends in our social media feeds, then we had influencers. The next natural phase of the evolution is a feed full of AI-created content, says Zuckerberg.

First we had friends. Then we had influencers. And if Mark Zuckerberg is correct, the next big thing in our social media feeds will be AI generated content. Lots of it.

Zuckerberg described our future feeds during Facebook-parent company Meta’s third quarter earnings conference call on Wednesday, describing it as a natural evolution.

#meta #markzuckerberg #instagram #facebook #ai #generativeai

“I think we’re going to add a whole new category of content which is AI generated or AI summarized content, or existing content pulled together by AI in some way,” the Meta CEO said. “And I think that that’s gonna be very exciting for Facebook and Instagram and maybe Threads, or other kinds of feed experiences over time.”

Zuckerberg touted the company’s Llama large language model and the success of products it powers, such as the Meta AI chatbot that is now used by more than 500 million users every month. But Llama will increasingly play a role across Meta’s business, Zuckerberg said, including tools for business customers and advertisers.

As AI tools become more widespread, AI content will proliferate within social media feeds. Such feeds are actively being worked on inside Meta, Zuckerberg noted. “It’s something we’re starting to test different things around.”

“I don’t know if we know what’s exactly going to work really well yet, but some things are really promising,” he added. “I have high confidence that over the next several years, this will be one of the important trends and one of the important applications.”

Earth’s Invisible Shield Rebounds: The Remarkable 2024 Ozone Recovery

In 2024, the ozone hole over the Antarctic showed a notable reduction in size, ranking as the seventh smallest since monitoring began post-Montreal Protocol.

This improvement is credited to ongoing reductions in CFC emissions and enhanced atmospheric dynamics that transport ozone southward.

#ozone #earth #nature #science #antarctic

Ozone Layer Recovery Progress in 2024
In 2024, the annual hole in the ozone layer over Earth’s southern pole was relatively small compared to previous years. NASA and the National Oceanic and Atmospheric Administration (NOAA) estimate that, if current trends continue, the ozone layer could fully recover by 2066.

This year’s peak ozone depletion season, which lasts from September 7 to October 13, saw the ozone hole rank as the seventh smallest since recovery efforts began in 1992, following the Montreal Protocol—a global agreement to phase out ozone-depleting chemicals.

The ozone-depleted region over Antarctica averaged nearly 20 million square kilometers (8 million square miles) this year, covering an area almost three times the size of the contiguous United States. On September 28, the hole reached its largest single-day extent of 22.4 million square kilometers (8.5 million square miles).

The map above shows the size and shape of the ozone hole over the South Pole on the day of its 2024 maximum extent. Moderate ozone losses (orange) are visible amid areas of more potent ozone losses (red). Scientists describe the ozone “hole” as the area in which ozone concentrations drop below the historical threshold of 220 Dobson units.

What Is a Dobson Unit?

The Dobson Unit (DU) is the standard measurement for ozone concentration in Earth’s atmosphere. It quantifies the total amount of ozone in a column of air from the surface to the edge of space. One Dobson Unit equals a 0.01-millimeter layer of pure ozone at standard temperature and pressure. For example, 300 DU would form a 3-millimeter ozone layer if compressed. Scientists use Dobson Units to observe ozone health globally, providing insight into seasonal thinning and recovery patterns.
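The conversion described above is linear: 1 DU corresponds to a 0.01 mm layer of pure ozone at standard temperature and pressure. A one-line Python sketch makes the arithmetic explicit (the function name is mine):

```python
# Convert a total-column ozone reading in Dobson Units (DU) to the
# equivalent thickness, in millimeters, of a layer of pure ozone at
# standard temperature and pressure. 1 DU = 0.01 mm.
def du_to_mm(dobson_units):
    return dobson_units * 0.01

print(du_to_mm(300))             # 3.0  (a typical healthy column)
print(round(du_to_mm(220), 1))   # 2.2  (the ozone-hole threshold)
```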

Japan plans automated cargo transport system to relieve shortage of drivers and cut emissions

Japan is planning an automated cargo transport corridor between Tokyo and Osaka to make up for a shortage of truck drivers.

The amount of funding for the project is not yet set. But it’s seen as one key way to help the country cope with soaring deliveries.

#japan #osaka #technology #cargo #transportation #drivers

A computer graphics video made by the government shows big, wheeled boxes moving along a three-lane corridor, also called an “auto flow road,” in the middle of a big highway. A trial system is due to start test runs in 2027 or early 2028, aiming for full operations by the mid-2030s.

“We need to be innovative with the way we approach roads,” said Yuri Endo, a senior deputy director overseeing the effort at the Ministry of Land, Infrastructure, Transport and Tourism.

Apart from making up for a shrinking labor force and the need to reduce workloads for drivers, the system also will help cut carbon emissions, she said.

“The key concept of the auto flow-road is to create dedicated spaces within the road network for logistics, utilizing a 24-hour automated and unmanned transportation system,” Endo said.

Osaka (Japanese: 大阪市, Hepburn: Ōsaka-shi, pronounced [oːsakaɕi]; commonly just 大阪, Ōsaka [oːsaka]) is a designated city in the Kansai region of Honshu in Japan, and one of the three major cities of Japan (Tokyo-Osaka-Nagoya). It is the capital of and most populous city in Osaka Prefecture, and the third-most populous city in Japan, following the special wards of Tokyo and Yokohama. With a population of 2.7 million in the 2020 census, it is also the largest component of the Keihanshin Metropolitan Area, which is the second-largest metropolitan area in Japan and the 10th-largest urban area in the world with more than 19 million inhabitants.

Challenging Quantum Supremacy: The Surprising Power of Classical Computers

As the rivalry between quantum and classical computing intensifies, scientists are making unexpected discoveries about quantum systems.

Classical computers outperformed a quantum computer in simulations of a two-dimensional quantum magnet system, showing unexpected confinement phenomena. This discovery by Flatiron Institute researchers redefines the practical limits of quantum computing and enhances understanding of quantum-classical computational boundaries.

#quantum #classical #computing #technology #quantummechanics

Classical Computer Triumphs Over Quantum Advantage
Earlier this year, researchers at the Flatiron Institute’s Center for Computational Quantum Physics (CCQ) announced that they had successfully used a classical computer and sophisticated mathematical models to thoroughly outperform a quantum computer at a task that some thought only quantum computers could solve.

Now, those researchers have determined why they were able to trounce the quantum computer at its own game. Their answer, presented on October 29 in Physical Review Letters, reveals that the quantum problem they tackled — involving a particular two-dimensional quantum system of flipping magnets — displays a behavior known as confinement. This behavior had previously been seen in quantum condensed matter physics only in one-dimensional systems.

This unexpected finding is helping scientists better understand the line dividing the abilities of quantum and classical computers and provides a framework for testing new quantum simulations, says lead author Joseph Tindall, a research fellow at the CCQ.

Clarifying Quantum Boundaries
“There is some boundary that separates what can be done with quantum computing and what can be done with classical computers,” he says. “At the moment, that boundary is incredibly blurry. I think our work helps clarify that boundary a bit more.”

By harnessing principles from quantum mechanics, quantum computers promise huge advantages in processing power and speed over classical computers. While classical computations are limited by the binary operations of ones and zeros, quantum computers can use qubits, which can represent both 0 and 1 simultaneously, to process information in a fundamentally different way.
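The qubit idea in the paragraph above can be illustrated without any quantum SDK: model a qubit as a pair of complex amplitudes whose squared magnitudes give measurement probabilities, and apply a Hadamard gate to turn a definite 0 into an equal superposition. This is a minimal textbook sketch, not code from the research described:

```python
import math

# A qubit as (alpha, beta): amplitudes for measuring 0 and 1,
# with |alpha|^2 + |beta|^2 = 1.
def hadamard(q):
    """Apply the standard 2x2 Hadamard gate to a single qubit."""
    a, b = q
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probs(q):
    """Measurement probabilities for outcomes 0 and 1."""
    a, b = q
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1 + 0j, 0 + 0j)   # definite |0>, like a classical bit
plus = hadamard(zero)      # equal superposition of 0 and 1
print(probs(plus))         # ~(0.5, 0.5)
```

A classical bit would stay at probability (1, 0) or (0, 1); the superposed qubit yields each outcome half the time, which is the "both 0 and 1" behavior the article alludes to.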

Sharper and Smarter: “Robotic Cat Eyes” Give Drones Super Sight

Feline-inspired vision technology enhances accuracy in challenging environments, paving the way for smarter, more efficient autonomous systems.

Korean researchers have developed an advanced vision system for autonomous drones and robots, inspired by the unique eye structure of cats. This new technology, using a slit-like aperture and reflective layer, enhances visibility in various lighting conditions, promoting more efficient object detection and recognition.

#drone #roboticcateyes #technology #Korea

Inspired by Nature: The Cat’s Eye
Autonomous systems like drones, self-driving cars, and robots are becoming more integrated into daily life, yet they often struggle to “see” clearly in varied conditions—whether it’s bright sunlight, low light, or busy, complex backgrounds. Remarkably, nature may hold the solution.

Cats are renowned for their impressive vision in both daylight and darkness. Their eyes are specially adapted: during the day, vertical slit-shaped pupils help them focus sharply and reduce glare. At night, these pupils widen to allow in more light, while a reflective layer called the tapetum lucidum enhances their night vision, giving their eyes that distinct glow.

Technological Leap: Feline-Inspired Vision Systems
A group of Korean researchers led by Professor Young Min Song from Gwangju Institute of Science and Technology (GIST) designed a new vision system that uses an advanced lens and sensors inspired by feline eyes. The system includes a slit-like aperture that, like a cat’s vertical pupil, helps filter unnecessary light and focus on key objects. It also uses a special reflective layer similar to the one found in cat eyes that improves visibility in low-light conditions.

This research was published recently in the journal Science Advances and represents a significant advancement in artificial vision systems, demonstrating enhanced object detection and recognition capabilities and positioning it at the forefront of technological breakthroughs in autonomous robotics.

AI Slop Is Flooding Medium

The blogging platform Medium is facing an influx of AI-generated content. CEO Tony Stubblebine says it “doesn’t matter” as long as nobody reads it

AI slop is flowing onto every major platform where people post online—and Medium is no exception.

The 12-year-old publishing platform has undertaken a dizzying number of pivots over the years. It’s finally on a financial upswing, having turned a monthly profit for the first time this summer. Medium CEO Tony Stubblebine and other executives at the company have described the platform as “a home for human writing.” But there is evidence that robot bloggers are increasingly flocking to the platform, too.

#ai #medium #technology #generativeai #contentcreation

Earlier this year, WIRED asked AI detection startup Pangram Labs to analyze Medium. It took a sampling of 274,466 recent posts over a six-week period and estimated that over 47 percent were likely AI-generated. “This is a couple orders of magnitude more than what I see on the rest of the internet,” says Pangram CEO Max Spero. (The company’s analysis of one day of global news sites this summer found 7 percent as likely AI-generated.)

The strain of slop on Medium tends toward the banal, especially compared with the dadaist flotsam clogging Facebook. Instead of Shrimp Jesus, one is more apt to see vacant dispatches about cryptocurrency. The tags with the most likely AI-generated content included “NFT”—out of 5,712 articles tagged with this phrase over the last several months, Pangram found that 4,492, or around 78 percent, came back as likely AI-generated—as well as “web3,” “ethereum,” “AI,” and, for whatever reason, “pets.”

WIRED asked a second AI detection startup, Originality AI, to run its own analysis. It examined a sampling of Medium posts from 2018 and compared it with a sampling from this year. In 2018, 3.4 percent were estimated as likely AI-generated. CEO Jon Gillham says that percentage corresponds to the company’s false-positive rate, as AI tools were not widely used at that point. For 2024, with a sampling of 473 articles published this year, it suspected that just over 40 percent were likely AI-generated. With no knowledge of each others’ analyses, both Originality and Pangram came to similar conclusions about the scope of AI content.

“Impact printing” is a cement-free alternative to 3D-printed structures

Impact printing uses high-speed jets of material found at building sites to form load-bearing structures.

Recently, construction company ICON announced that it is close to completing the world’s largest 3D-printed neighborhood in Georgetown, Texas. This isn’t the only 3D-printed housing project. Hundreds of 3D-printed homes are under construction in the US and Europe, and more such housing projects are in the pipeline.

#cementfree #construction #icon #3dprinting #materials #homes

There are many factors fueling the growth of 3D printing in the construction industry. It reduces the construction time; a home that could take months to build can be constructed within days or weeks with a 3D printer. Compared to traditional methods, 3D printing also reduces the amount of material that ends up as waste during construction. These advantages lead to reduced labor and material costs, making 3D printing an attractive choice for construction companies.

A team of researchers from the Swiss Federal Institute of Technology (ETH) Zurich, however, claims to have developed a robotic construction method that is even better than 3D printing. They call it impact printing, and instead of typical construction materials, it uses Earth-based materials such as sand, silt, clay, and gravel to make homes. According to the researchers, impact printing is less carbon-intensive and much more sustainable and affordable than 3D printing.

This is because Earth-based materials are abundant, recyclable, available at low costs, and can even be excavated at the construction site. “We developed a robotic tool and a method that could take common material, which is the excavated material on construction sites, and turn it back into usable building products, at low cost and efficiently, with significantly less CO2 than existing industrialized building methods, including 3D printing,” said Lauren Vasey, one of the researchers and an SNSF Bridge Fellow at ETH Zurich.

Housing of the Future Must Be Different from Housing We Have Known

Homebuilding hasn't changed since the Middle Ages.

To address the global housing crisis, something radical and courageous needs to happen. Construction-scale 3D printing is designed to not only deliver high-quality homes faster and more affordably, but fleets of printers can change the way entire communities are built for the better.

.......
Jason Ballard / ICON Co-Founder & CEO

How it All Started

In 2018, we told people we were going to 3D print a house and unveil it during SXSW in Austin, TX before we knew how to do it. Innovation is synonymous with risk and somebody had to take a risk. In partnership with housing non-profit New Story, we successfully delivered the first permitted 3D-printed home in the world.

Fast forward to today, ICON has 3D-printed more than 140 homes and structures across the U.S. and Mexico. We recently unveiled a new suite of technologies and products to further automate construction including a radical new robotic printer that enables multi-story construction, a new low-carbon building material, a digital catalog for residential architecture, and an AI Architect for home design and construction. Together, these technologies make our construction technology platform a faster, more sustainable way to build high-quality housing affordably around the world.

ICON Unveils New Construction Technologies for Lowest Cost, Fastest, and Most Sustainable Way to Build at Scale

During SXSW, Leading Construction Technology Company ICON Announces A Multi-story 3D Printer, A New AI Tool for Architecture and Project Management, A Digital Catalog of Home Designs, and Advanced Low-carbon Concrete

AUSTIN, TX, March 12, 2024 – At a large event during SXSW® dubbed “Domus Ex Machina” ICON, the pioneer of advanced construction technologies and large-scale 3D printing, announced a new suite of products and technologies designed to further automate construction including a radical new robotic printer that enables multi-story construction, a new low-carbon building material, a digital catalog for residential architecture with more than 60 ready-to-build home designs, and an AI Architect™ for home design and construction.

ICON believes that together these technologies make its construction technology platform a faster, more sustainable way to build high-quality housing affordably around the world.

“This is the moment we’ve really been working for these past six years,” said Jason Ballard, ICON Co-Founder and CEO. “When we launched the company and the first permitted 3D-printed house in 2018 during SXSW, we set out to both decrease the cost and increase the quality of building instead of choosing one or the other. We didn’t want to just be the best at 3D printing, we wanted to be the best at building, period. Now, I believe we can say that is a reality. I am so proud of the work ICON has done to get to where we are today – not only the promise, but the reality of technology, architecture and materials that will allow us to build better than anyone in the world.”

ICON’s new suite of robotics, software and materials include:

Phoenix™: ICON’s new multi-story robotic construction system introduces the capability of printing an entire building enclosure including foundations and roof structures. By increasing speed and size and decreasing setup time and the number of required operators, this advanced robotic system will reduce ICON printing costs by half. ICON is now taking orders for projects using Phoenix starting at $25/square foot for wall systems or $80/square foot including foundation and roof. This cost to build is lower than the most recent publicly available data for conventional construction of wall systems*. This wall system cost would represent a savings of up to $25,000 for the average American home versus conventional construction. The first engineering prototype of Phoenix has completed a 27-foot-tall architectural demonstration structure, now on display in Austin, TX.

CODEX™: ICON’s digital catalog of ready-to-print home architecture features more than 60 designs across five collections: Texas modern, fire resilient, storm resilient, affordable, and avant garde. The aim of CODEX is to make high-design and high-performance residential architecture available at all price points. CODEX allows builders, developers, and home buyers to build with ICON quickly and affordably using world-class architecture. ICON’s aim is for CODEX to be the most comprehensive digital catalog of buildable home designs in the world. It empowers ICON customers to select preferred designs as a starting point for their master planned communities and developments. ICON will continue to introduce new collections and will also partner with and compensate architects all over the world to feature their designs.

Three of the CODEX collections available today were designed by BIG-Bjarke Ingels Group. Architects can submit their designs and developers can explore the collections and initiate projects with ICON today at https://codex.iconbuild.com.

CarbonX™: ICON’s CarbonX is a new low-carbon extrudable/printable concrete formula. When paired with ICON’s wall system and robotic construction methods, ICON’s CarbonX formula is the lowest carbon residential building system ready to be used at scale. A white paper co-authored with the MIT Concrete Sustainability Hub, “Reducing carbon emissions in the built environment: A case study in 3D-printed homes” (published March 12, 2024), features a case study of 3D-printed homes employing CarbonX. The life cycle assessment results of the white paper show that the embodied and operational impacts of 3D-printed homes are lower than stick-framed construction. ICON will be shipping CarbonX to the field in April 2024.

ICON has also announced that it will make its material available to other projects and customers, not only its own 3D-printed projects. Future formulations of CarbonX are already in development to reduce carbon footprint even further and are expected to be announced in the coming year.

Vitruvius™: An AI system for designing and building homes. The ultimate goal of Vitruvius is to take human and project inputs and produce robust architecture, plans, permit-ready designs, budgets, and schedules. Launched today with an open beta, Vitruvius will help anyone design homes and generate floor plans, interior renders, and exterior renders in minutes based on their own desires, budgets, and feedback. By the end of this year, Vitruvius will progress all the way through schematic designs and in the following year ICON believes its AI architect will be able to produce full construction documents as well as permit-ready designs, budgets, and schedules. What truly makes Vitruvius unique is the combination of design and construction know-how. That knowledge is what allows Vitruvius to produce designs that can actually be built. Join the list to be the first to experience Vitruvius beta at vitruvius.ai.

Ballard continued, “In the future, I believe nearly all construction will be done by robots, and nearly all construction-related information will be processed and managed by AI systems. It is clear to me that this is the way to cut the cost and time of construction in half while making homes that are twice as good and more faithfully express the values and hopes of the people who live in them. We are going to need the same velocity of ambitious technological breakthroughs that we’ve experienced in these past few years, but we know where we are headed. Going forward, ICON is an AI and robotics company focused on transforming the way we build and accelerating what we believe is a very exciting future. Vitruvius will become the default method for ICON in designing custom homes. We intend to be selling and building Vitruvius-designed homes beginning this year.”

During the showcase event, the winning designs from phase I of Initiative 99™ were also revealed. The global architecture competition challenges entrants to reimagine affordable housing that could be built for $99,000 or less without sacrificing beauty, dignity, comfort, sustainability, or resiliency. Submissions came from more than 60 countries, and six winners and ten honorable mentions were awarded prize money from the $1M purse during the event, presented by Wells Fargo, lead supporter of the competition. The first-, second-, and third-place designs in each category will be featured as a collection in ICON’s CODEX.

Wells Fargo also announced on stage their foundation has committed $500,000 in grant funding to Mobile Loaves & Fishes, the Austin nonprofit that has been faithfully serving the area’s homeless community for more than 25 years, to help bring to life Initiative 99-designed homes and see multiple homes built at Community First! Village (CFV) to serve the underhoused community. Upon completion of Phase II of the global design competition, ICON and Mobile Loaves & Fishes will select one winning design for ICON to deliver multiple units within CFV’s expansion of their master planned development in Austin, TX.

“ICON’s innovative 3D-printed technology paired with these beautiful, imaginative Initiative 99 designs represent a model for the future of affordable housing. Wells Fargo is proud to help make these homes a reality,” said Darlene Goins, President of the Wells Fargo Foundation.

ICON and partner Liz Lambert also unveiled further plans to expand the new El Cosmico™ in Marfa, TX to include Initiative 99 winning designs where the existing bohemian campground is located. This adds to the newly designed and reimagined 60+ acre expansion of El Cosmico in far west Texas that will feature a hotel, hospitality amenities, and homes. The project breaks ground this year.

The latest announcements from ICON will provide the tools and technology to deliver more beautiful and resilient neighborhoods, communities and subdivisions enabled by the design freedom and new possibilities of 3D-printing.

“If you are a person who wants to own an ICON home, we want to hear from you so that we know what to design, where to build and what your hopes are for your own future,” Ballard said. “If you are a developer who needs support to deliver your project ahead of schedule and under budget and feel good about what you’ve created in the world, we want to build with you.

“If you are a builder who wants to take the most advanced construction tools in the world with you into the field, we want to work with you. If you are an architect, who wants to help us develop entirely new design languages and architectural vernaculars that align with your culture, values and imagination, we want to work together. We want to bring the entire industry together and equip everyone with the tools to properly build our future.”

Biomedical Imaging Breakthrough: Silver Nanoislands Amplify Signals 10,000,000x

Researchers at Osaka University have developed a new method for enhancing fluorescence and Raman spectroscopy signals using a dense random array of silver nanoislands and a protective silica layer.

This breakthrough significantly amplifies the detection capabilities without damaging the cells, offering potential applications in environmental monitoring and medical diagnostics.

#biotech #biomedical #technology #osaka #university #japan #cells

Advanced Spectroscopy Techniques
Today’s biologists can explore the intricate structures inside living cells with tools far beyond the traditional light microscope. Techniques like fluorescence and Raman spectroscopy have become essential for monitoring biological processes non-invasively.

These methods use a light source—typically a laser—to stimulate electronic transitions in fluorescence or molecular vibrations in Raman spectroscopy.

Challenges in Current Spectroscopic Methods
Despite their usefulness, these techniques come with challenges. Fluorescent tags can interfere with normal cell functions, and Raman signals are often very weak. Increasing the laser’s power or exposure time to strengthen the signal can damage sensitive biological molecules.

To overcome this, researchers have developed surface-enhanced versions of these methods using metal substrates or nanostructures to amplify the signal. However, these enhancements can also pose risks to cell integrity.

Breakthrough in Signal Enhancement
Now, in a study published on October 28 in the journal Light: Science & Applications, scientists from Osaka University described a new method for the long-range enhancement of fluorescence and Raman signals using a dense random array of Ag nanoislands.

The analyte molecules are kept separate from metal structures using a 100-nm thick column-structured silica layer. This layer is thick enough to protect the molecules being studied, but at the same time thin enough for the collective electromagnetic oscillations in the metal layer, called plasmons, to enhance the spectroscopic signal.

“We demonstrated that the range of influence of plasmons in metals can exceed 100 nanometers, far beyond what conventional theory predicted,” lead author Takeo Minamikawa says.

US Space Force warns of “mind-boggling” build-up of Chinese capabilities

Russia and China “have developed and demonstrated the ability to conduct war fighting in space.”

Both Russia and China have tested satellites with capabilities that include grappling hooks to pull other satellites out of orbit and “kinetic kill vehicles” that can target satellites and long-range ballistic missiles in space.

#russia #china #unitedstates #spaceforce #technology #military #satellites

In May, a senior US defense department official told a House Armed Services Committee hearing that Russia was developing an “indiscriminate” nuclear weapon designed to be sent into space, while in September, China made a third secretive test of an unmanned space plane that could be used to disrupt satellites.

The US is far ahead of its European allies in developing military space capabilities, but it wanted to “lay the foundations” for the continent’s space forces, Saltzman said. Last year, UK Air Marshal Paul Godfrey was appointed to oversee the US Space Force’s partnerships with NATO and other allies—one of the first times a high-ranking allied pilot had joined the US military.

But Saltzman warned against a rush to build up space forces across the continent.

“It is resource-intensive to separate out and stand up a new service. Even ... in America where we think we have more resources, we underestimated what it was going to take,” he said.

The US Space Force, which monitors more than 46,000 objects in orbit, has about 10,000 personnel but is the smallest branch of the US military. Its officers are known as “guardians.”

The costs of building up space defense capabilities mean the US is heavily reliant on private companies, raising concerns about the power of billionaires in a sector where regulation remains minimal.

SpaceX, led by prominent Trump backer Elon Musk, is increasingly working with US military and intelligence through its Starshield arm, which is developing low Earth orbit satellites that track missiles and support intelligence gathering.

As hospitals struggle with IV fluid shortage, NC plant restarts production

The initial batches will be shipped in late November at the earliest.

The western North Carolina plant that makes 60 percent of the country's intravenous fluid supply has restarted its highest-producing manufacturing line after being ravaged by flooding brought by Hurricane Helene last month.

While it's an encouraging sign of recovery as hospitals nationwide struggle with shortages of fluids, supply is still likely to remain tight for the coming weeks.

#hospitals #iv #northcarolina #health #hurricanehelene

IV fluid maker Baxter Inc, which runs the Marion plant inundated by Helene, said Thursday that the restarted production line could produce, at peak, 25 percent of the plant's total production and about 50 percent of the plant's production of one-liter IV solutions, the product most commonly used by hospitals and clinics.

“Recovery progress at our North Cove site continues to be very encouraging," Baxter CEO and President José Almeida said. "In a matter of weeks, our team has advanced from the depths of Hurricane Helene’s impact to restarting our highest-throughput manufacturing line. This is a pivotal milestone, but more hard work remains as we work to return the plant to full production."

Overall, Baxter said it is ahead of its previously projected timeline for getting the massive plant back up and running. Previously, the company said it had aimed to produce 90–100 percent of some products by the end of the year. Still, the initial batches now under production are expected to start shipping in late November at the earliest.

Google rolls out Gemini AI features across Maps, Earth, and Waze

Enhanced with conversational queries and more detailed insights

What just happened? These updates are a significant step forward in integrating generative AI with geospatial technologies. As these features roll out to users, developers, and urban planners, Google hopes they will transform how we interact with our physical environment.

#google #gemini #ai #maps #earth #waze #technology

Google is updating its mapping platforms with new generative AI capabilities based on its Gemini AI model. The updates, rolling out across Google Maps, Google Earth, and Waze, aim to enhance these services and solve complex geospatial problems.

Starting this week, Google Maps users in the US on Android and iOS will receive more detailed and contextual search results powered by Gemini AI. The new feature allows users to make conversational requests, such as asking for suggestions for a night out with friends in a specific city. Gemini curates responses using the vast database of places and user reviews within Google Maps.

ChatGPT now has its own web search engine

OpenAI is challenging Google in its own search turf

AI vs Web: Alphabet attempted to curb ChatGPT's growing popularity by accelerating the launch of its AI-based consumer services. However, the chatbot continued to gain market share and public interest. Now, it's going on the offensive, challenging Google's monopolistic position in the web search business.

#chatgpt #openai #google #search #searchengine

OpenAI has just launched its new ChatGPT Search service, which offers a novel way to use the AI chatbot to find relevant information on the web. According to OpenAI, ChatGPT is now significantly more proficient at internet searching, offering "fast, timely" answers to users' questions. The chatbot's prompt-based interface can now work alongside up-to-date information and data, though it may still produce occasional hallucinations.

The web search feature will be available on the ChatGPT main site as well as through official apps for desktop PCs and mobile devices. Access is afforded to all Team and ChatGPT Plus users, while enterprise and education customers will receive it in the coming weeks. All free users will eventually gain access over the next few months.

OpenAI noted that today's web search is not as useful as it once was, claiming that obtaining relevant answers often requires "a lot of effort." With ChatGPT Search, users will no longer need to conduct multiple search queries or browse links. Instead, the chatbot can provide a "better answer" to search requests and further refine results through follow-up questions.

ChatGPT search results will now include the sources consulted by the chatbot, displayed in a sidebar on the right. OpenAI explains that its search model uses a fine-tuned version of GPT-4, trained with novel, "synthetic" data generation techniques. These search results are powered by unnamed third-party search providers, along with high-quality content partners to further refine accuracy.

Windows 11 reaches 35% market share, but Windows 10 still leads by a wide margin

It took three years, but users are warming up to Microsoft's latest OS

What it means The growing adoption of Windows 11 is a positive sign for Microsoft, indicating that users are gradually overcoming initial hesitations about the OS. However, with Windows 10 still commanding most of the market share, Microsoft must speed up this transition to ensure a smooth handover before Windows 10 support ends.

#windows11 #windows10 #microsoft #computers #technology

Microsoft's Windows 11 is finally gaining significant traction in the market, nearly three years after its initial release. According to recent data from Statcounter, Windows 11 reached an all-time high market share of 35.55 percent in October 2024 – an increase of 2.13 percentage points from the previous month.

This acceleration is a notable shift from the sluggish growth of the past year. In October 2023, Windows 11 held just 26.17 percent of the market share, with minimal month-to-month changes and even occasional declines. The recent surge suggests that users are increasingly embracing Microsoft's latest operating system.

As Windows 11 gains ground, Windows 10 is experiencing a proportional decline in its user base. The older operating system now accounts for 60.95 percent of all Windows users, closing in on the 60 percent mark for the first time since September 2019. This represents a decrease of 1.8 percentage points from the previous month.

GTA Online on PC will finally catch up to console versions in 2025

Better late than never

Highly anticipated: Grand Theft Auto Online's PC players have been eagerly anticipating the current-gen console upgrades to make their way to the PC version for years now. Well, the wait is finally over... kind of. Rockstar casually slipped in the news that the highly requested PlayStation 5 and Xbox Series X|S features are coming to the PC platform sometime in 2025.

#gaming #grandtheftauto #pc #technology

The announcement was buried at the bottom of a recent GTA Online community update.

"There is much more still to come, including ongoing weekly special events and bonuses, festive celebrations, gifts, surprises, as well as plans to bring the much-requested PlayStation 5 and Xbox Series X|S features of GTA Online to the PC platform in the new year. Please stay tuned to the Rockstar Games Newswire for details," notes the release.

As for what these "much-requested" features are, they're related to the enhanced edition of GTA V that launched for PS5 and Xbox Series consoles back in 2022. With it arrived a host of visual and gameplay improvements. From a graphics perspective, it introduced goodies like more presets, ray-traced shadows, improved anti-aliasing, better lighting, and other enhancements, such as increased population and traffic variety.

Tesla Cybertruck Killing Ford F150 Lightning Demand

Back in January, 2024 Ford cut production of the 2024 Ford F-150 Lightning in half. A few months later, Ford also trimmed its workforce at the Rouge plant, shifting some to other facilities and offering others retirement packages. Production of the 2025 Ford F-150 was slated to begin in November, but now, Ford will instead idle assembly lines for seven weeks.

#ford #tesla #cybertruck #f150 #ev #production

According to Automotive News, FoMoCo will idle production of the Ford F-150 Lightning at the Rouge facility for seven weeks, starting at the end of the day on November 15th, 2024 and resuming on January 6th, 2025.

The Tesla Cybertruck has over double the sales of the Ford F-150 Lightning after only starting sales this year.

The Tesla Cybertruck is clearly killing demand for the Ford F-150 Lightning.

Ford was selling about 1,700 per month, and sales are falling. Ford’s September sales came before the lower-cost Cybertruck arrived in October.

In October 2024, Tesla shifted from selling $100,000–140,000 Foundation Series models to the $80,000 model. Ford sales had already fallen before the new lower-priced models arrived. Ford sales have likely fallen below 1,000 units per month, and the company will have inventory through February 2025.

From Wikipedia:

The Tesla Cybertruck is a battery electric pickup truck built by Tesla, Inc. since 2023. Introduced as a concept vehicle in November 2019, it has a controversial body design reminiscent of low-polygon modelling, consisting of flat stainless steel sheet panels.

Tesla initially planned to produce the vehicle in late 2021, but after many delays, it entered production in mid-2023 and was first delivered to customers in November 2023. Two models are currently offered: a tri-motor all-wheel drive (AWD) model called Cyberbeast, and a dual-motor AWD model. A single-motor rear-wheel drive (RWD) model is slated to be available in 2025. EPA range estimates cover 250–340 miles (400–550 km), varying by model. As of December 2023, the Cybertruck was available only in North America.

Background

Tesla CEO Elon Musk's ideas for a pickup truck were first stated publicly in 2012 and 2013, envisioning a "Tesla supertruck with crazy torque, dynamic air suspension, and corners like it's on rails". In early 2014, Musk predicted 4–5 years before work could start on the product; then, in a 2014 interview with CNN, Musk stated that the Tesla pickup would be the equivalent of a Ford F-150. In mid-2016, the outline for a consumer pickup truck was included in part 2 of the Tesla Master Plan. Musk suggested that the same chassis could be used for a van and a pickup truck. In 2017, a picture teasing a "pickup truck that can carry a pickup truck" was displayed at the official reveal for the Tesla Semi and Roadster.

In March 2019, following the Tesla Model Y launch, Musk distributed a teaser image of a vehicle described as having a cyberpunk or Blade Runner style, with the form resembling a futuristic armored personnel carrier. It was rumored to be named the Model B. On November 6, 2019, Tesla filed for a trademark on "Cybrtrk", which was granted by the United States Patent and Trademark Office but was later abandoned on August 10, 2020.

Thousands of hacked TP-Link routers used in years-long account takeover attacks

The botnet is being skillfully used to launch “highly evasive” password-spraying attacks.

Hackers working on behalf of the Chinese government are using a botnet of thousands of routers, cameras, and other Internet-connected devices to perform highly evasive password spray attacks against users of Microsoft’s Azure cloud service, the company warned Thursday.

#hacker #technology #china #internet #cybersecurity

The malicious network, made up almost entirely of TP-Link routers, was first documented in October 2023 by a researcher who named it Botnet-7777. The geographically dispersed collection of more than 16,000 compromised devices at its peak got its name because it exposes its malware on port 7777.

Account compromise at scale
In July and again in August of this year, security researchers from Serbia and Team Cymru reported the botnet was still operational. All three reports said that Botnet-7777 was being used to skillfully perform password spraying, a form of attack that sends large numbers of login attempts from many different IP addresses. Because each individual device makes only a few login attempts, the carefully coordinated account-takeover campaign is hard for the targeted service to detect.
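A minimal sketch of why this pattern evades per-IP rate limiting: each source address stays under typical lockout thresholds, so the campaign only becomes visible when failed logins are aggregated across all source IPs. This is an illustrative heuristic in Python with made-up thresholds, not any vendor's actual detection logic.

```python
from collections import defaultdict

def detect_password_spray(failed_logins, min_accounts=50, max_attempts_per_ip=3):
    """Flag a likely password-spray campaign in a batch of failed-login
    events, each a (source_ip, account) tuple.

    A spray is characterised by *many accounts* being tried while every
    *source IP* stays under normal lockout thresholds; per-IP rate
    limiting alone never fires.
    """
    attempts_per_ip = defaultdict(int)
    targeted_accounts = set()
    for ip, account in failed_logins:
        attempts_per_ip[ip] += 1
        targeted_accounts.add(account)
    low_and_slow = all(n <= max_attempts_per_ip for n in attempts_per_ip.values())
    return low_and_slow and len(targeted_accounts) >= min_accounts
```

A classic single-source brute force trips the per-IP counter and is not flagged as a spray, while a botnet trying one password against many accounts from thousands of addresses is.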

On Thursday, Microsoft reported that CovertNetwork-1658—the name Microsoft uses to track the botnet—is being used by multiple Chinese threat actors in an attempt to compromise targeted Azure accounts. The company said the attacks are “highly evasive” because the botnet—now estimated at about 8,000 strong on average—takes pains to conceal the malicious activity.

“Any threat actor using the CovertNetwork-1658 infrastructure could conduct password spraying campaigns at a larger scale and greatly increase the likelihood of successful credential compromise and initial access to multiple organizations in a short amount of time,” Microsoft officials wrote. “This scale, combined with quick operational turnover of compromised credentials between CovertNetwork-1658 and Chinese threat actors, allows for the potential of account compromises across multiple sectors and geographic regions.

Solving the 7777 Botnet enigma: A cybersecurity quest

Discover 7777 botnet (aka Quad7) and its activity, targets, and use of TP-Link routers in Microsoft 365 attacks in our latest investigation.

Key Takeaways
Sekoia.io investigated the mysterious 7777 botnet (aka the Quad7 botnet), first documented by the independent researcher Gi7w0rm in the “The curious case of the 7777 botnet” blog post.

This investigation allowed us to intercept network communications and malware deployed on a TP-Link router compromised by the Quad7 botnet in France.

To our understanding, the Quad7 botnet operators leverage compromised TP-Link routers to relay password spraying attacks against Microsoft 365 accounts without any specific targeting.

Therefore, we link the Quad7 botnet activity to possible long term business email compromise (BEC) cybercriminal activity rather than an APT threat actor.

However, certain mysteries remain regarding the exploits used to compromise the routers, the geographical distribution of the botnet and the attribution of this activity cluster to a specific threat actor.

The insecure architecture of this botnet led us to think that it can be hijacked by other threat actors to install their own implants on the compromised TP-Link routers by using the Quad7 botnet accesses.

At Sekoia.io, we have detected these attacks on 0.11% of our monitored Microsoft 365 accounts and have been tracking this botnet since our Intrinsec colleagues shared their findings with us. As this botnet was quite mysterious, targeting our customers and nobody had published on it since Gi7w0rm’s blog post, “The Curious Case of the 7777 Botnet,” we decided to investigate it.

This blog post will present the full investigation, our successes, and our failures, as it is always interesting to be transparent and provide feedback to the threat intelligence community and teams that may deal with similar IOT/SOHO threats in the future.

Are all of these compromised TP-Links?

When we started our investigation on this threat, we began by examining what kind of assets had been compromised. This botnet is quite old and constantly evolving, with the number of unique IP addresses involved dropping from 16,000 in August 2022, to ~7,000 in July 2024. The geographic distribution of compromised devices is quite surprising, as Bulgaria remains the most infected country, followed by Russia, the US, and Ukraine, as shown below.

According to open-source publications, the Quad7 botnet is suspected to target different kinds of IOT devices, including IP cameras, NAS devices, and SOHO routers, predominantly TP-Link. However, our investigation found that almost all – we cannot be completely certain – compromised assets were in fact TP-Link routers.

The bias in the analysis of compromised assets results from the fact that the operators of the Quad7 botnet try to disable the TP-Link management interface after compromising it by stopping the binary acting as a web server. Therefore, no TP-Link associated interface or banner is present in many results of online scanners such as Shodan or Censys.

To confirm this hypothesis, we identified valuable information in the TCP window size returned by compromised assets on TCP ports 11288 and 7777. We used the hping3 tool to scan compromised IP addresses, and it turns out that most of the compromised devices participating in the Quad7 botnet have a window size known to be related to old versions of the Linux kernel used by TP-Link routers. For many TP-Link products, two TCP window size values stand out: 5840 (mostly) and 5760.
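The fingerprinting step boils down to matching observed window sizes against the two values above. A small Python sketch of that classification (the window-size values come from the article; the helper itself is illustrative, and a match is consistency with an old TP-Link kernel, not proof):

```python
# A SYN probe such as:  hping3 -S -p 7777 <ip>
# reports the reply's TCP window size in its "win=NNNN" field.
TPLINK_WINDOW_SIZES = {5840, 5760}  # values observed in the investigation

def likely_tplink(window_size: int) -> bool:
    """Heuristic only: a matching window size suggests an old Linux
    kernel as shipped on many TP-Link routers; it does not by itself
    prove the device model or that the device is compromised."""
    return window_size in TPLINK_WINDOW_SIZES
```

Outliers (such as window sizes rewritten by satellite-link providers) are exactly the cases the researchers went on to check by hand.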

We manually checked some other strange window sizes that we observed on approximately fifty IP addresses, and according to online scanner history, their TP-Link administration panels were open on the Internet in the past. These strange window sizes can sometimes be explained; for example, satellite-link providers sometimes expand the window size for performance gains. Therefore, at this point we were convinced that the Quad7 botnet operators had at least one exploit chain to gain remote code execution (RCE) against several management interfaces of TP-Link products.

First attempts to catch the Quad7 botnet

Based on these initial findings, we decided to monitor a TP-Link WR841N router (firmware: 3.16.9 Build 150320 Rel.57500n) for a few months. This model is the most compromised according to Censys, and this firmware version is known to be vulnerable to the Quad7 botnet. We exposed the router from five different IP addresses: three residential IPs in France, one mobile IP in the UK, and one VPS in Bulgaria, the most impacted country.

The router was fully monitored, including its processes, file system, and network activity. We created a setup to conduct remote live forensic analysis whenever something suspicious caught our attention. To do so, a Raspberry Pi was connected to the router via UART, also serving as a network tap on the WAN interface, as illustrated in the diagram below. The UART access enabled us to receive alerts via our internal instant messaging application for any suspicious activity in the /tmp/ directory (the rest of the filesystem is read-only) and among the running processes.
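The /tmp/ alerting described above can be approximated with a simple polling watcher. This is a generic sketch of the idea, not the authors' actual UART setup; the `alert` hook is a placeholder for whatever messaging channel is in use:

```python
import os
import time

def snapshot(path):
    """Map each file under path to its (size, mtime) for later comparison."""
    state = {}
    for root, _, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                st = os.stat(full)
                state[full] = (st.st_size, st.st_mtime)
            except OSError:
                pass  # file vanished between listing and stat
    return state

def diff(old, new):
    """Return files that appeared or changed since the previous snapshot."""
    return [f for f, meta in new.items() if old.get(f) != meta]

def watch(path="/tmp", interval=5, alert=print):
    """Poll the directory and fire an alert for every new or modified file."""
    state = snapshot(path)
    while True:
        time.sleep(interval)
        current = snapshot(path)
        for changed in diff(state, current):
            alert(f"suspicious write: {changed}")
        state = current
```

On a router where /tmp/ is the only writable location, any hit from `diff` is worth investigating.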

The network tap, for its part, wasn't merely a simple network bridge running tcpdump. We utilised the well-known Python Scapy library to monitor and alert us, again via our internal instant messaging service, about any authenticated access to the management interface and any exploitation of standard vulnerabilities such as command injection, file disclosure, etc. The aim was to identify the vulnerabilities exploited by the Quad7 operators.

As we were unaware of the exact exploit chain used by the Quad7 operators to achieve remote code execution, we also employed Scapy to dynamically modify authentication attempts. This enabled us to accept any credentials provided by attackers attempting to access the management interface, thereby allowing us to observe the final RCE exploitation, if any.
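As a rough sketch of the interception idea, here is how HTTP Basic credentials can be parsed out of captured request bytes so that every authentication attempt can be logged. The request and credentials below are invented; the actual setup described above ran Scapy against live traffic:

```python
import base64
import re

def extract_basic_credentials(raw_http: bytes):
    """Return (user, password) from an HTTP Basic Authorization header, or None."""
    match = re.search(rb"Authorization: Basic ([A-Za-z0-9+/=]+)", raw_http)
    if not match:
        return None
    decoded = base64.b64decode(match.group(1)).decode("utf-8", errors="replace")
    user, _, password = decoded.partition(":")
    return user, password

# Example: a captured management-interface request (made-up credentials)
request = (b"GET /userRpm/Index.htm HTTP/1.1\r\n"
           b"Host: 192.168.0.1\r\n"
           b"Authorization: Basic YWRtaW46aHVudGVyMg==\r\n\r\n")
print(extract_basic_credentials(request))  # -> ('admin', 'hunter2')
```

Logging whatever credentials an attacker presents, while the tap transparently accepts them, is what exposes the final RCE stage.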

Capturing IoT/SOHO threats with honeypots?
When we set up this system, we were quite enthusiastic about seeing some attacks. Regarding the Quad7 botnet specifically, however, we were less optimistic, as it seemed to be working from an outdated list of target IP addresses. Deploying honeypots on IP addresses that were not on the threat actor's target list would therefore never draw an attack.

Honeypots are effective tools against standard threats, such as the general noise of cybercriminal activity on the internet (brute force attacks, scanning, and remote code execution at scale when a new CVE is published). However, capturing something more specific is much more difficult, as some threat actors target only residential IPs, specific ASNs or conduct reconnaissance before deploying their final payload to ensure that the targeted device is genuine.

We waited less than a week before observing a notable attack that chained an unauthenticated file disclosure, which does not appear to be public at this time (according to a Google search), with a command injection. The file disclosure allowed the attacker to retrieve the credential pair stored in /tmp/dropbear/dropbearpwd and replay it against the HTTP Basic authentication of the management interface. Once authenticated, the attacker exploited a known command injection vulnerability in the Parental Control page to achieve RCE.

We still don't know the goal of this attack: the attacker launched Dropbear (a pre-installed lightweight SSH agent) on a higher port, transferred their own BusyBox over the resulting SSH session, and then left the router after cleaning up their traces. It is interesting to note, however, that this threat actor also targeted IP addresses compromised by the Quad7 botnet.

Despite finding an overlap of more than 80 IP addresses between the two attacks during our investigation, we do not believe they are related. This threat actor engages in compromised-SOHO hopping (the attack originated from a compromised D-Link router) and uses SSH for file transfer, unlike the Quad7 operators, who use the TFTP protocol. Furthermore, this actor does not deactivate the management interface of the compromised router after exploiting it. Consequently, we occasionally observe this actor compromising routers before the Quad7 botnet operators, who then, most of the time, close the management interface.

During this monitoring we also observed brute-force attempts against the HTTP Basic authentication, exploitation of known file disclosure vulnerabilities affecting TP-Link devices, and instances of DNS records being altered to redirect users to rogue DNS servers for ad distribution. However, these activities seemed more related to the standard noise of SOHO/IoT targeting than to the Quad7 operations.

It remains unclear why the Quad7 operators persist in maintaining the infrastructure established in 2022 by re-compromising routers upon restart, rather than expanding their botnet by targeting new IP addresses. One possible reason could be to evade detection: IP addresses newly appearing after the publication of "The curious case of the 7777 botnet" could be honeypots set up to catch them. Another, more plausible, explanation is that they simply haven't updated their target list for months or even years. This hypothesis would also explain the decrease in compromised assets over time.

Identifying victims
As our honeypot didn't yield the expected results, we shifted to a different strategy: identifying victims in France. Of the eleven IPs observed in 2024, we were able to identify and contact three individuals, requesting their assistance in tapping their routers and physically recovering the Quad7-related malware.

Why intervene physically when a victim could simply send us their router? The reason is simple: the majority of the file system is read-only (squashfs), and the /tmp/ directory, while writable, lives in volatile memory. As soon as the router is unplugged, its file system resets, making it impossible to retrieve the malicious code.

Fortunately, one of these attempts was successful, providing us with more insights into the Quad7 botnet operations. Of the other two, one individual replaced his router after receiving our email but did not reply favourably, and the third did not reply at all, possibly thinking our message was a scam.

EUV chip production will soon reach record levels of energy consumption

Chip manufacturing equipment poses increasingly important environmental issues

In a nutshell: Extreme ultraviolet lithography is one of the most complex technological innovations in recent years. EUV machines are essential for producing smaller, more powerful microchips, but they consume massive amounts of power. Worse yet, their thirst for electricity is only expected to grow significantly in the coming years.

#euv #semiconductors #chip #lithography #technology

According to a recent TechInsights report, fabs equipped with EUV tools could see electricity consumption exceed 54,000 gigawatt-hours (GWh) annually by 2030. Put into perspective, that's more than the total power usage of smaller nations like Singapore or Greece.

Dutch company ASML is currently the world's only manufacturer of EUV tools, which require substantial investment and effort to integrate into chipmaking operations. Fabs using EUV systems for high-volume manufacturing can be found in countries such as Taiwan, South Korea, Japan, the US, Germany, and Ireland.

Current-generation EUV tools consume up to 1,170 kilowatts, while next-generation High NA EUV scanners are expected to reach power consumption levels of around 1,400 kilowatts.

TechInsights currently lists 31 fabs employing EUV machines for their chipmaking operations, with an additional 28 expected to come online by the end of 2030.

While EUV tools consume a significant amount of electricity, they account for only about 11 percent of a chip fab's total energy consumption. By 2030, the 59 chipmaking plants equipped with EUV capabilities are projected to consume a staggering 54,000 gigawatt-hours annually, 19 times the electricity needed to power the Las Vegas Strip.
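As a sanity check on these figures, a quick back-of-the-envelope calculation. The inputs come from the report quoted above; the arithmetic, and the simplifying assumption of round-the-clock operation, are ours:

```python
# Rough cross-check of the TechInsights figures quoted above.
HOURS_PER_YEAR = 24 * 365

euv_tool_kw = 1170        # current-generation EUV scanner draw
high_na_tool_kw = 1400    # next-generation High NA EUV scanner draw
fabs_by_2030 = 59         # 31 existing EUV fabs + 28 planned

# One EUV tool running continuously for a year, in gigawatt-hours:
tool_gwh_per_year = euv_tool_kw * HOURS_PER_YEAR / 1e6
print(f"{tool_gwh_per_year:.1f} GWh per tool per year")  # -> 10.2 GWh

# If EUV tools are ~11% of fab usage, their direct share of the 2030 projection:
projected_total_gwh = 54_000
euv_share_gwh = projected_total_gwh * 0.11
print(f"~{euv_share_gwh:.0f} GWh of that directly from EUV tools")
```

The remaining ~89 percent goes to everything else in the fab: cleanrooms, chillers, other process tools, and so on.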

Tesla AI FSD Will Vastly Improve in November

Tesla AI has updated its FSD (Full Self-Driving) progress report. There will be huge progress towards human-level driving, and Tesla expects eventually to surpass it.

Solving robotaxi and full self-driving is key to Tesla's future. Version 13.x should take FSD adoption (the percentage of people who buy a Tesla and get FSD) from 12% in the US to 25% worldwide (China, Europe, etc.), and it will be smooth and comfortable. Everyone else is at the 12.1 or v11 level or worse; every other company's self-driving system is 600x worse than version 13.

#tesla #ai #fsd #technology #robotaxi

Lots of work, but the team’s aiming to get to feature complete for unsupervised FSD with the v13 series! https://t.co/zqm1PUcGP4

— Ashok Elluswamy (@aelluswamy) October 31, 2024

Tesla is transitioning to superhuman driving. This is insanely hard even for Tesla: FSD barely works on Hardware 3 and really needs Hardware 4. Every other carmaker would need Hardware 4 or better installed on many cars to replicate this, plus an AI training cluster.

Going from v12.1 to version 13 required 2 billion miles of training data. Waymo, Apollo Go, and XPeng have started to follow. They are at best at the 12.1 level, and most experts have said the Chinese self-driving systems are at the level of Tesla FSD version 11. Tesla used 500,000 cars driving an average of 4,000 miles each on FSD in 2024 to improve the AI. Another company with 5,000 training cars would need 400,000 miles driven per car, which would take 4 years of cars permanently on the road with a human taxi driver.

Equivalently: 100,000 cars driving 20,000 miles each, or 50,000 cars driving 40,000 miles each. This means Tesla is a minimum of 4 to 6 years ahead of any possible competitor reaching where Tesla will be in November. Tesla expects to improve miles driven per intervention 1,000-fold in 2025. Catching up to Tesla's constant improvement would become impossible if Tesla licenses FSD to carmakers building over half of the world's cars.

Tesla FSD should go global in Q1 2025 with a high adoption rate. Tesla could reach 5 million cars with real-life FSD usage by mid-2025, covering highway and city driving (i.e., all driving). At about 1,000 miles per month per car, this would mean 5 billion miles per month of training data: from 1 billion miles per month by December 2024, to about 2-3 billion per month in April 2025, and then 5 billion per month in July 2025. That could total 40 billion FSD miles driven in 2025.
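The fleet-mileage arithmetic in the preceding paragraphs can be checked directly. All numbers are the article's projections, not official Tesla figures:

```python
# Cross-check the training-mileage arithmetic quoted above.

# v12.1 -> v13 reportedly required ~2 billion miles of fleet data:
tesla_cars, miles_per_car = 500_000, 4_000
assert tesla_cars * miles_per_car == 2_000_000_000

# A competitor with 5,000 cars would need 400,000 miles driven per car:
competitor_cars = 5_000
miles_needed_per_car = 2_000_000_000 / competitor_cars
print(miles_needed_per_car)  # -> 400000.0

# Projected mid-2025 data rate: 5 million cars at ~1,000 miles/month each
fleet, miles_per_month = 5_000_000, 1_000
monthly_miles = fleet * miles_per_month
print(f"{monthly_miles / 1e9:.0f} billion miles per month")  # -> 5 billion
```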

A 5-6x improvement in miles between interventions would be greater than the roughly 3x gap between a new teenage driver and an experienced human driver.

Tesla has integrated several of these improvements and is already seeing a 4x increase in miles between necessary interventions compared to v12.5.4. This likely means a v12.5.6 or version 13.0 build is already showing a 4x increase in miles per intervention.

50,000 FSD customers now have end-to-end highway driving. This is FSD replacing Autopilot, which was already ten times safer than human driving.

FSD version 13 is releasing to Tesla employees, and version 13.3 will go wide to AI4 (Hardware 4) users around the end of November. Version 13 appears to be arriving a day later than the end of October (today), with a November 1st or 2nd release.

Actually Smart Summon will also roll out to European and Asian customers. This means driverless operation within line of sight of a parked Tesla.

As October comes to a close, here's an update on the releases

What we completed:
– End-to-end on highway has shipped to ~50k customers with v12.5.6.1
– Cybertruck build that improves responsiveness
– Successful We, Robot event with 50 autonomous Teslas safely transporting over… https://t.co/2xKiAjrk5R

— Tesla AI (@Tesla_AI) October 31, 2024

Tesla FSD Improvement List
– Full rollout of end-to-end highway driving to all AI4 users, targeted for early next week, including enhancements in stop smoothness, less annoying bad weather notifications, and other safety improvements
– Improved v12.5.x models for AI3 city driving
– Actually Smart Summon release to Europe, China and other regions of the world
– v13 is a package of the following major technology upgrades:
– 36 Hz, full-resolution AI4 video inputs
– Native AI4 inputs and neural network architectures
– 3x model size scaling
– 3x model context length scaling
– 4.2x data scaling
– 5x training compute scaling (enabled by the Cortex training cluster)
– Much improved reward predictions for collision avoidance, following traffic controls, navigation, etc.
– Efficient representation of maps and navigation inputs
– Audio inputs for better handling of emergency vehicles
– Redesigned controller for smoother, more accurate tracking
– Integrated unpark, reverse, and park capabilities
– Support for destination options including pulling over, parking in a spot, driveway, or garage
– Improved camera cleaning and handling of camera occlusions

#tesla has been testing its ride-hail program in the San Francisco area for the past year. The service is only available to employees.

It is something overlooked in the #robotaxi race.

Microsoft CEO Just Unveiled Autonomous AI Agents: The Future of AI is Here

Quantum Machines and Nvidia use machine learning to get closer to an error-corrected quantum computer

About a year and a half ago, quantum control startup Quantum Machines and Nvidia announced a deep partnership that would bring together Nvidia's DGX Quantum computing platform and Quantum Machines' advanced quantum control hardware. We didn't hear much about the results of this partnership for a while, but it's now starting to bear fruit and getting the industry one step closer to the holy grail of an error-corrected quantum computer.

#nvidia #quantum #error #dgxquantum #technology

The future of immersive technology is a mix of reality and fiction

Have you ever used virtual reality glasses? Note that I said virtual reality, which is totally different from our everyday lives. This equipment consists of large goggles that we wear to escape the real world entirely: we stop seeing the actual environment and are transported to a 100% different digital universe, which could be a game, a movie theater, or anything of the sort. This technology is already quite old, as crazy as that statement may sound.

#technology #future

Then the technology virtually inserts a monitor on the living room table where a YouTube video is playing, digitally pastes a grocery shopping list on the refrigerator door, and opens a portal to a gaming application on the bedroom wall, for example. No matter how much you walk around the house, all these digital insertions remain there, as if they were physical.


The first prototypes appeared in the 60s and 70s, can you believe it?! And although they have become more common, especially in the last 10 years, virtual reality glasses never truly became a mainstream product.

That's why, for the masses, virtual reality has remained a bit of a luxury game: a cool, different experience to have no more than half a dozen times in your life. And this, from a business point of view, limits the growth and development of a promising industry around this technology.

But, in recent years, the concept has reinvented itself and grown considerably. With the evolution and miniaturization of chips, virtual reality glasses proved to be a first step towards what we now know as mixed reality. Yes, this is already more attractive for more intense use.

It is mixed because, when we wear today's new glasses, we continue to see the world around us. If we put mixed reality glasses on in the living room, we keep seeing the living room, because this equipment has external cameras that capture the scene around us and reproduce it on the screens positioned right in front of our eyes inside the glasses.

They are very impressive and seem to be truly revolutionary products that completely change the way we understand reality. But the Vision Pro, launched in February this year, has sold well below what Apple expected, and one of the reasons is still the price. The glasses cost an incredible US$3,500, which, in direct conversion, without counting import taxes, comes to around R$20,000; for that money you could buy an old Celta with 200,000 kilometers on the clock.

They are far superior, from a technology and usability point of view, to the Quest. But those Meta glasses have entry-level versions starting at US$299 that offer an experience which, despite being much inferior, is still conceptually similar. For the price of a single Apple Vision Pro, you can buy 11 units of the cheapest Meta Quest and still have more than US$200 left over for apps and games.

Google Maps gets one of the biggest updates in its history thanks to Gemini; Waze gets changes too

Google has announced new generative AI features for Maps and Google Earth, along with a small touch-up for Waze. Gemini is the core component of Google's AI, and bringing it to Maps was practically mandatory. Very soon we will be able to enjoy its integration into Maps.

#google #ai #technology

The first new feature Google presented relates to this integration, which aims to help us make plans. For example, we will be able to ask Maps what plans we could make with friends for the evening, and the app will show us bars, restaurants, and points of interest to visit.

If we want more information about these places, we can ask Gemini to tell us more. These functions will begin rolling out first in the United States this week. There is no word yet on when they will be available elsewhere. Google says that, beyond these suggestions, Gemini will also be integrated into reviews, summarizing them with AI.

Google will also improve Maps' navigation mode to make it easier to understand where we are driving. Historically, one of the app's main problems is that it is not always clear which lane to take for the next exit, something Waze or Petal Maps usually handle much better. Now Maps will show us, via a blue indicator, exactly which lane to take.

There are other important improvements as well. Maps will show us nearby parking lots, walking directions from our car to the place we marked, and even information about weather incidents.

Next, the company announced the biggest update yet for Immersive View. It will now offer more detail, indications of where we can park in the area, and even warnings about any tricky turns along the route. This new feature will roll out this week on Android and iOS in Spain.

Lastly, Google announced new generative AI features in Google Earth. Gemini will be able to display more detailed information about cities, answering questions such as "Can you show on a map the five postal codes with the fewest charging stations relative to their geographic area?". These features will be made available in pilot tests next month, so there is still some way to go before they are fully up and running.

Little by little, artificial intelligence startups are infiltrating Hollywood

Keanu Reeves has fought machines on the big screen. Now his performances are serving as training material for the machines.

#technology #ai

In September, Lionsgate struck a deal with Runway, allowing the AI company to train a new generative AI model on its extensive library of films and TV shows, including hit franchises such as "John Wick", "Saw", and "The Hunger Games".

The studio's vice chairman, Michael Burns, believes the partnership will save "millions and millions of dollars" by helping filmmakers with pre- and post-production processes.

The deal signals a shift in how major studios view the role of artificial intelligence in film production. It also comes at a decisive moment for Hollywood and AI. Like several other companies, Runway faces legal challenges and copyright-infringement claims over its image-generation system.

While the big studios quietly begin to assess the potential of artificial intelligence, a wave of startups and smaller companies is building tools designed to enhance Hollywood's creativity.

These homegrown solutions, created by industry professionals who understand the intrinsic challenges of film and television production, aim to solve specific difficulties in the creative process while preserving the human touch that defines good storytelling.

The ultimate success of these tools (and their acceptance by creative professionals wary of AI) remains an open question in an industry facing rapid technological change.

One of these tools comes from Ryan Turner, creative director at the production company Echobend. He launched a project to turn scripts into audio using generative AI built on technology from the startup ElevenLabs.

The project was inspired by a problem Turner often faces: he has 15 to 50 scripts to review and cannot find time to read them during the workday. He read more at home, but struggled to keep staring at a screen.

So he imagined that an audio version of the scripts, to listen to during his commute or gym sessions, could be a viable solution. And he knew he wasn't the only one facing this bottleneck: turnaround times for script coverage tend to be weeks, or even longer. Turner saw an opportunity there.

Echobend's solution, which uses more than 30 voices, speeds up the narrative in the style of a radio drama, alternating character voices, though with certain limitations.

"If there's a comedic scene, the program won't nail the tone," Turner says. "It won't be a perfect read, but you'll understand it was a joke."

Startups are also aiming their AI tools at other parts of the production process. Lore Machine, a visualization tool, lets writers upload their scripts and generate a gallery of images with consistent characters and locations.

Thobey Campion, the startup's founder, envisions a future in which screenwriters can create and distribute their own digital media, possibly retaining more control over their work. "What we've seen in Hollywood is that the early adopters are actually the writers," Campion says.

Instead of generating characters from scratch for every scene, Lore Machine draws on its library of more than three thousand pre-built poses. These poses are 3D models mapped to key points such as hands, elbows, and shoulders.

When a character needs to appear in a new scene, the system selects an appropriate pose and applies the chosen art style on top of it. This approach helps ensure that characters keep their appearance and proportions across different images and scenes.

Text processing is equally crucial to the system. Lore Machine uses a combination of small, specialized language models working alongside larger, more general ones. This teamwork lets the AI better understand and visualize the elements of a script.

Lore Machine and Echobend are not alone in trying to apply generative AI to the margins of film production. Startups and tools such as OneDoor, Charismatic.ai, and Storyboarder.ai promise to help with script creation.

The three words liars repeat most, according to artificial intelligence

Do you know the three words liars repeat most? While using them does not guarantee someone is lying, their frequency is a warning sign.

#technology #ai #hivebr

The habit of repeating certain words in a conversation can indicate deceptive behavior. And according to artificial intelligence (AI), there are three words that liars repeat most. Do you know what they are?

Although using these words does not guarantee that someone is lying, their frequency is a warning sign.

With the advance of artificial intelligence, modern algorithms can evaluate millions of conversations and identify terms or expressions that, according to linguistic patterns, may indicate a tendency to lie.

According to the AI, the three most common words in deceptive speech are "really", "never", and "honestly". Although they seem like ordinary terms, the context in which they are used can give away an attempt at manipulation.

Frequently used to reinforce a statement, "really" is seen as a strategy to make a claim more convincing.

People who lie tend to emphasize their statements to win the listener's trust, even knowing that what they say is false. According to experts, the use of "really" is common because it creates an impression of sincerity.

When liars use "never", they are trying to assert something with total certainty, eliminating any doubt.

However, absolute statements often arouse suspicion, since it is rare for something to have truly never happened. The AI finds that the use of "never" frequently indicates a defensive reaction meant to ensure no doubt remains about what is being said.

"Honestly" is one of the words liars repeat most, used by those who want to appear frank and trustworthy. By starting a sentence with "honestly", a person tries to project an image of transparency.

Experts note that, paradoxically, when someone frequently repeats "honestly" or "to be honest", it can indicate that something in their story is not entirely true.