You are viewing a single comment's thread from:

RE: LeoThread 2024-09-02 09:39

in LeoFinance · 6 months ago

Here is the daily technology #threadcast for 9/2/24. We aim to educate people about this crucial area while providing information about what is taking place.

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make this a technology center.


Kennedy Space Center, Florida, USA —
A new launch date for SpaceX's Polaris Dawn Mission is set after crews were delayed several times due to system checks, a helium leak, and most recently, unfavorable weather.

SpaceX filed a new launch schedule with the Federal Aviation Administration, with the first window opening at 3:38 a.m. on Wednesday, Sept. 4.

Why There's No One-Size-Fits-All Theory of Everything

Physicists often try to come up with theories that explain everything, but the universe isn't that simple! Imagine the universe as a patchwork of different worlds, each unique and not always connected. Thinking we can sum up all of reality with one theory or equation misses the point. Truth is all about context, and trying to boil everything down to one idea strips away the richness and variety of our universe. The beauty lies in the complexity, like a mosaic made of countless pieces, each adding something special to the bigger picture. So, let's celebrate the messy, diverse nature of reality instead of looking for a tidy explanation that doesn't really fit.

> S👁️URCE <


Could this be grandpa @bradleyarroww?

The guy in the gif is far too young to be Bradley.

No, he's not.

Sure it is. Bradley is very old. He was born over 90 years ago. I admit he does look good for his age.

Grandpa is still young, not up to that age. It's the mom that is up to that

I don't own a DELL 😜

Who does? LOL

Probably Adell? Adellheid?

I thought that was your older self 🤣🤣🤣..

😂 nah
I'm pretty sure Bradley is older than me.

Bradley is older than even your uncle's oldest shoe.

I think so, but who knows about @taskmaster4450le 🤔🤔🤔

As far as I know and seeing all his wisdom and knowledge I would say he is from the last century. 😂

What!!! 😂😂😂

I am old too... but not as old as (and a lot more handsome than) @bradleyarrow.

@bradleyarrow is not that old 🤣🤣🤣🤣, he is beautiful 🤩🤩🤩🤩..

lmao

I just thought that wasn't your older self either, @taskmaster4450le. So @luchyl needs to get another GIF so we can examine it... Lol

@luchyl is a rabble-rouser.

That is part of her being a vixen.

I'm nothing close to what you call me. 😂

You love what I call you...it excites you.

On the contrary . 🥺 But what can I do since you won't stop.

No I see 🙈🙈🙈🙈🙈 when I close my eyes, I see more... Lol

lol, agreed

Oh yeah 😂😂😂.

Get one for me.

You started this war, not me

This threadcast is above 100 comments now, just waiting for @coyotelation to set up his threadcast so that we can pull it as planned...

I guess he is on holiday.

I messaged him a few minutes ago and I saw his threads...

Did you call him all kinds of mean names?

No no, I just messaged him from the wormholes 😂.

Do they charge data rates from there?

They do all you desire..

The case against AI art

No matter how powerful generative AI becomes, writer Ted Chiang says it will never create true art.

Chiang is one of the most admired science fiction authors writing today, best known for the novella “Story Of Your Life” (which was adapted into the movie “Arrival”). But he’s also published terrific pieces for The New Yorker looking at the dangers and shortcomings of AI.

#technology #ai #art

You should really read his latest article in its entirety, but briefly: Chiang argues that the potential of large language models remains “largely theoretical” — thus far, generative AI has been most successful at “lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning.”

Even as LLMs improve, Chiang argues that their output will never be art — which he acknowledges is “notoriously hard to define,” but he tries anyway: “Art is something that results from making a lot of choices.” Sure, those choices might not result in a particularly good novel or painting or film, but you’re still “engaged in an act of communication between you and your audience.”

“We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world,” Chiang concludes. “That is something that an auto-complete algorithm can never do, and don’t let anyone tell you otherwise.”

Palantir’s CTO, and 13th employee, has become a secret weapon for Valley defense tech startups

Palantir CTO Shyam Sankar is determined to help Palantir become a driving force for defense tech startups.

On August 5, Palantir’s CTO Shyam Sankar stood in front of about 20 nervous new employees at the company’s Washington, D.C. office and gave a speech you’d expect at a new hire meeting: company mission, Palantir’s history, etc. But there was one part that would’ve seemed unfathomable a few years ago: Sankar evangelized the importance of a new wave of defense tech startups, spun up by Palantir, Tesla and SpaceX alums.

#technology #defense #government

The significance was more ideological than financial. Any business Palantir gets from startups is dwarfed by its government contracts, after all. But you can’t put a price on philosophical bedfellows.

Palantir likes to remind the world that it’s not like other publicly traded companies, i.e. buttoned-up and appropriately distanced from its cowboy private days. Sankar ends his orientation by cheekily inviting the new employees to shout “f–k off” at him — a way, he says, to encourage a flat structure. And as he walks out of the event room into the office bullpen, he passes a sign referring to employees as “founders” and “trailblazers.”

The sign is fitting: Sankar, who’s been at the company for over 18 years, is determined to help Palantir become a driving force for defense tech startups, a sector that’s been flooded with over $129.3 billion in venture capital since 2021, according to PitchBook.

“To have this class of new champions who have all cut their teeth in Tesla and SpaceX and see the world in completely different ways, it’s providing a huge amount of energy internally for us as we build for them,” he said, referring to startups like Apex Space and Castelion, whose founders hail from those companies.

That is why, in late 2023, he started a program that provides guidance and tools to defense tech startups called First Breakfast. He referred to it in his writings as Palantir’s “Amazon.com to AWS moment.” Basically, it’s a play for Palantir to get in at the ground level for the next Palantirs. It’s a business strategy, but also a philosophical one for Sankar, who spends hours a week on the phone consulting with defense startups and venture capitalists.

Like his longtime boss Palantir CEO Alex Karp, Sankar likes to wax poetic about protecting Western values and how America’s industrial base stumbled after its WWII glory days (though perhaps prime defense contractors like Lockheed Martin and RTX, formerly known as Raytheon Technologies, would disagree that the industry has been resting on its laurels for 80 years). Sitting in a conference room lined with white boards, and a constant stream of commercial planes flying over the Potomac River behind him, he reiterated his darkest fears: that America may not be ready for whatever great conflict comes next.

'Emotion AI' may be the next trend for business software, and that could be problematic

As businesses experiment with embedding AI everywhere, one area starting to gain more attention is Emotion AI.

As businesses experiment with embedding AI everywhere, one unexpected trend is companies turning to AI to help their many newfound bots better understand human emotion.

It’s an area called “emotion AI,” according to PitchBook’s new Enterprise SaaS Emerging Tech Research report, which predicts this tech is on the rise.

#ai #technology

The reasoning goes something like this: if businesses deploy AI assistants to execs and employees and make AI chatbots their front-line salespeople and customer service reps, how can an AI perform well if it can’t tell the difference between an angry “What do you mean by that?” and a confused “What do you mean by that?”

Emotion AI claims to be the more sophisticated sibling of sentiment analysis, the pre-AI tech that attempts to distill human emotion from text-based interactions, particularly on social media. Emotion AI is what you might call multimodal, employing sensors for visual, audio, and other inputs combined with machine learning and psychology to attempt to detect human emotion during an interaction.
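The gap between text-only sentiment analysis and multimodal emotion AI can be sketched in a few lines. This is a toy illustration only: the function, the volume feature, and every threshold below are invented for the example, and bear no relation to any vendor's actual API or model.

```python
# Toy sketch: the same sentence reads as "angry" or "confused" depending
# on a second, non-text modality (here a fake loudness feature in dB).
# Text-only sentiment analysis sees an identical string in both cases;
# a multimodal signal is what disambiguates them. All thresholds are
# made up for illustration.

def classify_utterance(text: str, volume_db: float) -> str:
    """Return a coarse emotion label from text plus one audio feature."""
    is_question = text.strip().endswith("?")
    if is_question and volume_db > 70.0:   # loud question -> angry
        return "angry"
    if is_question:                        # same words, quieter -> confused
        return "confused"
    return "neutral"

same_text = "What do you mean by that?"
print(classify_utterance(same_text, volume_db=82.0))  # angry
print(classify_utterance(same_text, volume_db=55.0))  # confused
```

Real emotion AI systems replace the hand-picked threshold with learned models over camera, microphone, and wearable inputs, but the fusion idea is the same.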

Major AI cloud providers offer services that give developers access to emotion AI capabilities such as Microsoft Azure cognitive services’ Emotion API or Amazon Web Services’ Rekognition service. (The latter has had its share of controversy over the years.)

While emotion AI, even offered as a cloud service, isn’t new, the sudden rise of bots in the workforce gives it more of a future in the business world than it ever had before, according to PitchBook.

“With the proliferation of AI assistants and fully automated human-machine interactions, emotion AI promises to enable more human-like interpretations and responses,” writes PitchBook’s Derek Hernandez, senior analyst for emerging technology, in the report.

“Cameras and microphones are integral parts of the hardware side of emotion AI. These can be on a laptop, phone, or individually located in a physical space. Additionally, wearable hardware will likely provide another avenue to employ emotion AI beyond these devices,” Hernandez tells TechCrunch. (So if that customer service chatbot asks for camera access, this may be why.)

Why do so many home robots still suck?

Home robots’ unfulfilled potential is neither because of lack of demand on the part of consumers nor lack of effort from manufacturers.

“The Jetsons” debuted September 23, 1962. The first episode, titled “Rosey the Robot,” was an origin story of sorts for the titular character, describing how an overworked Jane hired the housekeeper. Sixty-two years after her debut, Rosey remains an important pop cultural touchstone for the unfulfilled promise of home robots.

#technology #Robots #home

Home robots’ unfulfilled potential isn’t because of a lack of consumer demand or lack of effort from manufacturers. It’s more complicated and nuanced than that, though ultimately it’s a matter of pricing, functionality and efficacy. Outside of the aforementioned use cases, today’s home robots don’t do enough or do what they do well enough, and building a robot that can tick both of those boxes would prove prohibitively expensive for those of us who can’t afford our own islands.

Vacuums make for good home robots

During his long tenure as iRobot CEO, co-founder Colin Angle was fond of saying that he didn’t become a successful roboticist until he became a vacuum salesman. It’s a fun quip that gets to something much deeper about the industry. Before the Roomba came along, the company had experimented with everything from baby dolls to military equipment.

iRobot found success when it focused on a simple task: cleaning floors. The earliest models were primitive by today’s standards, but they got the job done well enough to justify their price point. In addition to marking 62 years since Rosey’s TV debut, next month is also the Roomba’s 22nd anniversary. The robot vacuum is old enough to legally buy a case of Sam Adams.

In the 22 years since the Roomba launched, much of iRobot’s R&D has gone into making the system smarter: adding sensing, mapping, and AI, and integrating with smart assistants. The company has invested in other robotics categories as well, including gutter clearing, pool cleaning, and a lawnmower that may never see the light of day, but none has recaptured the Roomba’s magic.
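Part of why "clean the floor" was tractable is that even a very dumb coverage policy works on a small room. The sketch below is an assumption in the spirit of early robot vacuums (drive mostly straight, turn at random, turn when you bump a wall), not iRobot's actual algorithm:

```python
import random

# A bouncing walker on a grid: mostly straight runs, occasional random
# turns, and a forced turn at walls. Even this naive policy visits most
# of a small room given enough steps.

DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def bounce_coverage(size=10, steps=2000, turn_prob=0.2, seed=42):
    """Fraction of a size x size grid visited by the bouncing walker."""
    rng = random.Random(seed)
    x = y = size // 2
    dx, dy = rng.choice(DIRS)
    visited = {(x, y)}
    for _ in range(steps):
        if rng.random() < turn_prob:       # occasional random turn
            dx, dy = rng.choice(DIRS)
        nx, ny = x + dx, y + dy
        if not (0 <= nx < size and 0 <= ny < size):
            dx, dy = rng.choice(DIRS)      # "bump": turn at the wall
            continue
        x, y = nx, ny
        visited.add((x, y))
    return len(visited) / (size * size)

print(f"coverage after 2000 steps: {bounce_coverage():.0%}")
```

The hard part, as the article notes, was never the algorithm; it was doing this reliably at a price point consumers would accept.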

WazirX customers unlikely to recover full funds after $230M heist

Customers of WazirX, the Indian cryptocurrency exchange that suffered a $234 million hack in July, are unlikely to recover their funds in full through the ongoing restructuring process, a company legal adviser said on Monday.

#technology #wazirx #india

George Gwee, a director at restructuring firm Kroll working with WazirX, said that at least 43% of the money any customer had in WazirX is unlikely to be recovered. Last week, WazirX asked a Singapore High Court for six months’ protection while it restructures its liabilities.

The best case scenario is a return of “anywhere between 55% and 57% of the funds,” Gwee told journalists. Under a restructuring, WazirX’s priority will be to distribute remaining token assets to users in a pro-rata manner via crypto, it said. The company is holding discussions to share profit from its revenue-generating products, it added, without giving any concrete details.
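The pro-rata mechanics described above reduce to simple arithmetic: every customer recovers the same fraction of their balance. The balances below are hypothetical, chosen only to illustrate the 55% best-case figure from the article:

```python
# Pro-rata distribution sketch (illustrative numbers, not WazirX's
# actual balances): remaining assets are split in proportion to what
# each customer held, so everyone takes the same percentage haircut.

def pro_rata_distribution(balances: dict, recovered_fraction: float) -> dict:
    """Map each customer to their share of the recovered assets."""
    return {user: round(bal * recovered_fraction, 2)
            for user, bal in balances.items()}

balances = {"alice": 10_000.0, "bob": 2_500.0}
payouts = pro_rata_distribution(balances, recovered_fraction=0.55)
print(payouts)  # each customer recovers 55% and loses the other 45%
```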

The hack, which occurred in July, is the largest cryptocurrency theft in India to date and has sent shockwaves through the country’s crypto industry. The company has been scrambling to find ways to return money to its customers ever since. In July, it proposed that it would socialize the “force majeure” loss.

During the press conference, WazirX representatives avoided many questions about its efforts to shore up capital. The company said it’s holding talks with an unnamed white knight, but said the capital won’t be raised against equity because of an ongoing dispute with Binance, the world’s largest cryptocurrency exchange; Binance and WazirX were closely linked at one point.

In late 2019, Binance announced the acquisition of WazirX in a blog post, but later disputed that it had acquired the firm. After a public row, the cryptocurrency exchange terminated its tech offerings to the Indian firm in 2022.

Validus, a Singapore-based digital SME lending platform, secures $50M debt financing to help enterprises in Indonesia

Validus, a Singapore-based digital lending platform for small and medium businesses, has secured $50 million in debt financing from HSBC under the ASEAN Growth Fund strategy.

Validus will use the proceeds to support the financial inclusion of micro, small and medium enterprises (MSMEs) in Indonesia, addressing the challenges they face in accessing financial resources.

With 64.2 million MSMEs contributing 61% of Indonesia’s GDP, according to Indonesia’s Ministry for Economic Affairs, the potential for growth is immense. These MSMEs employ about 119.6 million people, which is 97% of the total workforce in the country. However, only about 17.5 million MSME players are tapping into the online ecosystem and e-commerce. Indonesian MSMEs face significant challenges in accessing financing, mainly due to commercial banks’ stringent operational, reporting, and collateral requirements, as per a 2017 report by the World Bank. Despite government initiatives, only around 20 percent of bank loans go to MSMEs, the World Bank report said.

Vikas Nahata (executive chairman) and Nikhilesh Goel (CEO) co-founded the business in Singapore in 2015. They developed a supply chain-focused lending model that uses non-traditional data, accessed through partnerships with traditional banks and international institutions. The company has since expanded into Indonesia (Batumbu), Thailand (Siam Validus), and Vietnam (Validus Vietnam).

“Traditional banks across the SEA region still rely on legacy credit evaluation methods for small businesses, and they are overly reliant on historical financials and real estate-backed collateral,” Goel said. “For a region experiencing GDP growth of 5-6% per annum, small businesses need access to stable and accessible working capital to grow their businesses and contribute to job creation and nation building. This is where Validus plays a major role as the largest digital SME financing provider across ASEAN.”

Its users are MSMEs, who primarily borrow for their short-term working capital needs, Goel told TechCrunch, while lenders include major international institutions (Citi, HSBC, FMO, Credit Saison, OikoCredit) and leading local banks (CIMB Niaga, Bank Mandiri) across Indonesia and Thailand. Goel mentioned that one of its differentiators is over 100 unique partnerships throughout the Southeast Asia region.

“Validus is the largest SME financing marketplace across the South East Asia region by outstanding loan book or monthly loan disbursals where we are currently averaging $150 million of new loan disbursals per month,” Goel said.

Example of an exponential curve:
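The exponential curve referenced above can be made concrete in a couple of lines: a quantity that multiplies by a fixed factor each step, here simple doubling.

```python
# A minimal exponential curve: each step multiplies the value by a
# fixed factor, so growth compounds rather than adding a fixed amount.

def exponential(factor: int, steps: int) -> list:
    """Value at each step when growth compounds by `factor`."""
    return [factor ** n for n in range(steps)]

print(exponential(2, 8))  # [1, 2, 4, 8, 16, 32, 64, 128]
```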

Apple may ditch those old familiar USB-A ports in the new Mac mini

The Mac mini will be the next Apple device to say goodbye to USB-A, according to Bloomberg’s Mark Gurman.

Apple customers have probably gotten used to seeing the familiar, rectangular USB-A ports replaced with their thinner USB-C siblings. And while USB-C has its advantages, the transition can sometimes leave users confused and scrambling for adapters.

#apple #mac #usb #technology

The Mac mini, meanwhile, hasn’t been significantly redesigned since 2010, but Gurman says that’s changing this fall. The updated models will include Apple’s new M4 chip, as well as five USB-C ports, plus an Ethernet port, an HDMI connector, and a headphone jack. But the USB-A ports will be gone.

Apple has a “Glowtime” press event scheduled for September 9, where the company is expected to announce the new iPhone 16. The new Mac lineup may not be announced until later in the fall.

Bolt reportedly threatens legal action against Silverbear Capital

In the latest twist in Bolt’s aggressive fundraising efforts, the fintech company’s CEO appears to have made a veiled threat of legal action against Silverbear Capital, the investment bank whose involvement in the deal remains in some dispute.

“We believe there was some internal miscommunication at Silverbear Capital, one of our lead investors, which has caused unnecessary confusion,” CEO Justin Grooms wrote in an email reportedly viewed by Forbes. “The fact is, they signed a binding term sheet committing $200 million. Our exceptional legal team at Gibson, Dunn & Crutcher stands ready to represent the company in seeking to enforce our rights vigorously.”

Bolt, which offers tools for one-click e-commerce checkout, did not immediately respond to a request for comment. Silverbear partner Veronica Welch told Forbes that “this never had anything to do with any miscommunications” and that the deal “was never discussed or approved in the company.”

Earlier this month, a leaked term sheet showed that Bolt was aiming to raise $200 million in equity funding and $250 million in “marketing credits” at a $14 billion valuation, with an unusual pay-to-play deal structure that would essentially force existing backers to either invest or lose their stakes in the company.
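The pay-to-play structure described above can be sketched in its simplest "forfeit" form. The numbers and mechanics below are hypothetical and simplified, not Bolt's actual terms: a backer who invests their pro-rata keeps their stake; one who sits out loses it.

```python
# Forfeit-style pay-to-play sketch (hypothetical terms): existing
# backers must invest in the new round or lose their stake entirely.

def stake_after_round(stake_pct: float, participates: bool) -> float:
    """An existing backer's stake after a forfeit-style pay-to-play round."""
    return stake_pct if participates else 0.0

print(stake_after_round(5.0, participates=True))   # 5.0  (stake kept)
print(stake_after_round(5.0, participates=False))  # 0.0  (stake lost)
```

Real pay-to-play terms vary (conversion to common stock is more typical than outright forfeiture), which is part of why the leaked structure was described as unusual.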

While Silverbear was initially reported to be leading the equity round, the firm’s partner Brad Pamnani recently told TechCrunch he’s actually putting the deal together through a special purpose vehicle (SPV) managed by a private equity fund based in the United Arab Emirates.

“At the beginning, I used my Silverbear email to respond to some things and that caused some confusion but Silverbear was never actually looking at this deal,” Pamnani said.

Meanwhile, The London Fund’s CEO confirmed in an interview with TechCrunch that the firm is contributing “marketing credits” to the deal. However, The London Fund released a statement Friday saying that it has not seen and cannot “confirm the validity of any part of a document leaked to the press.”

“We can confirm that there have been discussions between The London Fund and Bolt’s management; however, at no point have we stated that a transaction has concluded,” the firm said.

From Groq:

What is Kennedy Space Center?

The Kennedy Space Center (KSC) is a NASA facility located on Merritt Island, Florida, United States. It is one of the most famous and historic space centers in the world, and it has played a crucial role in the United States' space exploration program since the 1960s.

The Kennedy Space Center was established in 1962 as the launch site for NASA's Mercury and Gemini programs, which aimed to put humans into space and orbit the Earth. The center was named after President John F. Kennedy, who played a key role in the development of the US space program.

Over the years, the Kennedy Space Center has been the launch site for many historic space missions, including:

  1. Mercury-Redstone 3: The first American in space, astronaut Alan Shepard, launched from KSC on May 5, 1961.
  2. Gemini 4: The first American spacewalk was performed by astronaut Ed White during this mission in 1965.
  3. Apollo 7: The first manned Apollo mission, which carried astronauts Wally Schirra, Donn Fulton Eisele, and Walter Cunningham into space in 1968.
  4. Apollo 11: The first mission to land humans on the moon, which was launched from KSC on July 16, 1969.
  5. Space Shuttle program: KSC was the primary launch site for the Space Shuttle program, which operated from 1981 to 2011.

Today, the Kennedy Space Center is still an active NASA facility, although it has shifted its focus from launch operations to research, development, and testing of new space technologies. The center is also a popular tourist destination, with over 1.5 million visitors each year.

Some of the attractions and activities at the Kennedy Space Center include:

  1. Launch Complex 39A: A historic launch pad that has been restored and is now used for testing and development of new space systems.
  2. Apollo/Saturn V Center: A museum that showcases the Saturn V rocket that took astronauts to the Moon.
  3. Space Shuttle Atlantis: A retired Space Shuttle that is now on display at the center.
  4. Astronaut Encounter theater: A theater where visitors can meet and hear from NASA astronauts.
  5. Journey To Mars: An interactive exhibit that takes visitors on a virtual tour of the Red planet.

Overall, the Kennedy Space Center is a must-visit destination for anyone interested in space exploration, science, and technology.

What are humanoid robots?

Humanoid robots are robots that are designed to resemble humans in appearance and are capable of performing tasks that are similar to those of humans. They are typically designed to have a human-like body, with a torso, arms, and legs, and are often equipped with advanced sensors and actuators that allow them to move and interact with their environment in a way that is similar to humans.

Humanoid robots can be used for a variety of purposes, including:

  1. Research and development: Humanoid robots can be used to study human behavior, movement, and cognition, and to develop new technologies and techniques for human-robot interaction.
  2. Service and assistance: Humanoid robots can be used to assist people with disabilities, provide companionship and social interaction, and perform tasks that are difficult or impossible for humans to do.
  3. Entertainment: Humanoid robots can be used in theme parks, shows, and other forms of entertainment to create interactive and engaging experiences.
  4. Military and defense: Humanoid robots can be used for search and rescue, surveillance, and other military applications.

Some examples of humanoid robots include:

  1. Honda's ASIMO: A humanoid robot that is designed to assist people with daily tasks and to interact with humans in a natural way.
  2. Boston Dynamics' Atlas: A humanoid robot that is designed for search and rescue and other applications that require advanced mobility and manipulation capabilities.
  3. SoftBank Robotics' Pepper: A humanoid robot that is designed to provide customer service and assistance in retail and other settings.
  4. NASA's Valkyrie: A humanoid robot that is designed for space exploration and is being used to test new technologies and techniques for human-robot interaction.

Humanoid robots are typically characterized by their ability to:

  1. Walk and move in a human-like way
  2. Manipulate objects and perform tasks with their hands
  3. Communicate with humans through speech and gestures
  4. Sense and respond to their environment through sensors and actuators
  5. Learn and adapt to new situations and tasks

However, humanoid robots also have some limitations, such as:

  1. Limited strength and dexterity compared to humans
  2. Limited ability to understand and respond to complex social cues
  3. Limited ability to adapt to new situations and tasks without human intervention
  4. Limited ability to perform tasks that require high levels of precision and dexterity

Overall, humanoid robots are an exciting and rapidly evolving field that has the potential to revolutionize the way we live and work.

What companies are looking to build humanoid robots?

There are several companies around the world that are actively developing and building humanoid robots. Here are some examples:

  1. Honda: Honda's ASIMO is one of the most well-known humanoid robots, and the company is continuing to develop and improve its capabilities.
  2. Boston Dynamics: Boston Dynamics is a robotics company owned by Hyundai Motor Group. They are developing a humanoid robot called Atlas, which is designed for search and rescue applications.
  3. SoftBank Robotics: SoftBank Robotics is a Japanese company that is developing a humanoid robot called Pepper, which is designed for customer service and assistance.
  4. Sony: Sony is a Japanese electronics company that is developing a humanoid robot called SDR-4X, which is designed for entertainment and service applications.
  5. UBTECH: UBTECH is a Chinese robotics company that is developing a humanoid robot called Walker, which is designed for service and assistance applications.
  6. Hanson Robotics: Hanson Robotics is a Hong Kong-based company that is developing a humanoid robot called Sophia, which is designed for entertainment and service applications.
  7. Robotis: Robotis is a South Korean robotics company that is developing a humanoid robot called THOR, which is designed for service and assistance applications.
  8. FANUC: FANUC is a Japanese robotics company that is developing a humanoid robot called CR-35iA, which is designed for manufacturing and logistics applications.
  9. KUKA: KUKA is a German robotics company that is developing a humanoid robot called LBR iiwa, which is designed for manufacturing and logistics applications.
  10. ABB: ABB is a Swiss robotics company that is developing a humanoid robot called YuMi, which is designed for manufacturing and logistics applications.
  11. Festo: Festo is a German robotics company that is developing a humanoid robot called Bionic Handling Assistant, which is designed for service and assistance applications.
  12. PAL Robotics: PAL Robotics is a Spanish robotics company that is developing a humanoid robot called TIAGo, which is designed for service and assistance applications.
  13. Robotex: Robotex is a Turkish robotics company that is developing a humanoid robot called RoboThespian, which is designed for entertainment and service applications.
  14. Geminoid: Geminoid is a Japanese robotics company that is developing a humanoid robot called Geminoid HI-1, which is designed for entertainment and service applications.
  15. HRP-4C: HRP-4C is a Japanese humanoid robot developed by the National Institute of Advanced Industrial Science and Technology (AIST) and the University of Tokyo.

These are just a few examples of companies that are developing humanoid robots. There are many other companies and research institutions around the world that are also working on humanoid robotics.

What is Tesla doing with humanoid robots?

Tesla is working on developing humanoid robots as part of its broader efforts to create a more autonomous and efficient manufacturing process. The company has been testing and refining its humanoid robot technology, known as "Optimus," which is designed to perform a variety of tasks, including assembly, welding, and material handling.

Tesla's humanoid robots are designed to be highly flexible and adaptable, allowing them to perform a wide range of tasks and interact with other machines and humans in a variety of settings. The robots are equipped with advanced sensors and actuators, as well as sophisticated software and algorithms that enable them to learn and adapt to new situations.

Tesla's humanoid robots are being developed to work alongside human workers in its manufacturing facilities, with the goal of improving efficiency, reducing labor costs, and increasing productivity. The robots are also being designed to be able to work in a variety of environments, including clean rooms, warehouses, and construction sites.

Some of the specific tasks that Tesla's humanoid robots are being designed to perform include:

  1. Assembly: Optimus is being designed to perform a variety of assembly tasks, including assembling batteries, motors, and other components.
  2. Welding: The robots are being equipped with welding tools and are being designed to perform welding tasks, such as welding together metal parts.
  3. Material handling: Optimus is being designed to handle and move materials around the manufacturing facility, such as transporting batteries or other components.
  4. Quality control: The robots are being equipped with sensors and cameras that allow them to inspect and test products, ensuring that they meet quality standards.
  5. Maintenance: Optimus is being designed to perform routine maintenance tasks, such as cleaning and lubricating machinery.

Tesla's humanoid robots are still in the testing and development phase, and the company is working to refine the technology and integrate it into its manufacturing processes. However, the potential for humanoid robots to revolutionize manufacturing and improve efficiency is significant, and Tesla is at the forefront of this technology.

Why Tesla is in a good position for humanoid robots.

Nvidia's New Superchips: Twice the Power, Double the Challenge!

Nvidia’s latest Blackwell chips are game-changers. They're about twice the size of previous models and pack in 2.6 times more transistors, thanks to a clever design that combines two processors with memory components in a complex mesh. But making these chips isn’t easy—they’re so intricate that even a tiny flaw can ruin a $40,000 chip. Nvidia faced huge engineering challenges to get these right, and this article dives into what it took to make these cutting-edge chips a reality. If you’re into tech, this is a must-read!

> S👁️URCE <
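The claim above that a tiny flaw can ruin a $40,000 chip follows from how yield scales with die size. A standard first-order way to see it is the Poisson yield model (a textbook approximation, not Nvidia's data; the defect density below is an assumed illustrative number):

```python
import math

# Poisson yield model: the probability a die is defect-free falls
# exponentially with its area, so doubling the die size cuts yield by
# much more than half at a fixed defect density.

def poisson_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Probability that a die of the given area contains zero defects."""
    return math.exp(-defects_per_cm2 * area_cm2)

D0 = 0.1  # assumed defect density, defects per cm^2 (illustrative)
print(f"4 cm^2 die: {poisson_yield(4.0, D0):.1%} yield")
print(f"8 cm^2 die: {poisson_yield(8.0, D0):.1%} yield")
```

This is one reason Blackwell's two-die-plus-memory packaging is such an engineering gamble: the effective area that must come out flawless is roughly double that of earlier parts.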

Writers sue Anthropic for ‘stealing’ their work for AI training

Joining the ranks of Microsoft and OpenAI, Anthropic is now also facing a lawsuit from authors who claim that the creator of the Claude AI models used the writers’ work to train its LLMs unsolicited and, more importantly, without paying for it.

The trio of writers filed the suit in a court in the U.S. state of California. According to them, Anthropic abuses ‘human creativity and ingenuity’ by feeding the fruits of their labor (and that of others) without permission to the AI models it trains.

#ai #technology #data #openai #anthropic

“Humans who learn from books buy lawful copies of them, or borrow them from libraries that buy them, providing at least some measure of compensation to authors and creators,” the complaint reads. Anthropic has not responded apart from saying it is ‘aware’ of the suit, Reuters reports.

Motley crew of plaintiffs

The authors who sued Anthropic are Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. All three write about very different things: thriller and mystery stories in Bartz’s case, non-fiction in Graeber’s, while Johnson wrote about the war in Iraq after 9/11 and then turned to a true-crime account of the theft of exotic bird feathers for fly-fishing purposes(!).

Book writers are not the only ones to have set their sights on Anthropic. Music publishers have also sued the company because its AI models would regurgitate existing song lyrics in response to users’ prompts. In other words, those lyrics have allegedly also been used as training data.

Rival to OpenAI
Founded in 2021, Anthropic positions itself as a rival to OpenAI. Its models in the Claude series, including Sonnet, Opus, and Haiku, are said to be able to compete with the various models of its well-known San Francisco neighbour, including GPT-4o, GPT-3.5 Turbo, and GPT-4o mini.

Sam Altman, the much-discussed founder of OpenAI and just about the only one of the original team of founders still left, claims that AI models cannot be properly trained without copyrighted data. That means books, newspaper articles, scientific papers and other data sources containing high-quality information.

When you use one of the major chatbots, even the results feed Big Tech, since they gain more data (synthetic, in this case) to utilize.

At a minimum, take the results and post them to threads. This gets the data onto the Hive blockchain, opening it up for anyone to use.

AI Models Scaled Up 10,000x Are Possible by 2030, Report Says

Recent progress in AI largely boils down to one thing: Scale.

Around the beginning of this decade, AI labs noticed that making their algorithms—or models—ever bigger and feeding them more data consistently led to enormous improvements in what they could do and how well they did it. The latest crop of AI models have hundreds of billions to over a trillion internal network connections and learn to write or code like we do by consuming a healthy fraction of the internet.

#ai #technology #scaling

It takes more computing power to train bigger algorithms. So, to get to this point, the computing dedicated to AI training has been quadrupling every year, according to the nonprofit AI research organization Epoch AI.

Should that growth continue through 2030, future AI models would be trained with 10,000 times more compute than today’s state-of-the-art algorithms, like OpenAI’s GPT-4.
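The 10,000x headline is just that 4x annual rate compounded over the rest of the decade. A minimal sketch of the arithmetic, assuming the rate holds exactly (real-world growth is lumpier):

```python
import math

# Epoch AI's observed growth rate in AI training compute: ~4x per year.
growth = 4.0

def factor(years: int) -> float:
    """Growth factor accumulated over `years` years of compounding."""
    return growth ** years

# Six to seven years of 4x growth brackets the headline figure:
print(f"after 6 years: {factor(6):,.0f}x")   # 4,096x
print(f"after 7 years: {factor(7):,.0f}x")   # 16,384x

# Years needed to hit exactly 10,000x at this rate:
print(f"10,000x takes about {math.log(10_000, growth):.1f} years")  # ~6.6 years
```

So sustaining 4x per year from the early 2020s through 2030 lands squarely in 10,000x territory.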

“If pursued, we might see by the end of the decade advances in AI as drastic as the difference between the rudimentary text generation of GPT-2 in 2019 and the sophisticated problem-solving abilities of GPT-4 in 2023,” Epoch wrote in a recent research report assessing how likely this scenario is.

But modern AI already sucks in a significant amount of power, tens of thousands of advanced chips, and trillions of online examples. Meanwhile, the industry has endured chip shortages, and studies suggest it may run out of quality training data. Assuming companies continue to invest in AI scaling: Is growth at this rate even technically possible?

In its report, Epoch looked at four of the biggest constraints to AI scaling: Power, chips, data, and latency. TLDR: Maintaining growth is technically possible, but not certain.

Can AI Scaling Continue Through 2030?

We investigate the scalability of AI training runs. We identify electric power, chip manufacturing, data and latency as constraints. We conclude that 2e29 FLOP training runs will likely be feasible by 2030.

In recent years, the capabilities of AI models have significantly improved. Our research suggests that this growth in computational resources accounts for a significant portion of AI performance improvements. The consistent and predictable improvements from scaling have led AI labs to aggressively expand the scale of training, with training compute expanding at a rate of approximately 4x per year.

#ai #training #compute #technology

To put this 4x annual growth in AI training compute into perspective, it outpaces even some of the fastest technological expansions in recent history. It surpasses the peak growth rates of mobile phone adoption (2x/year, 1980-1987), solar energy capacity installation (1.5x/year, 2001-2010), and human genome sequencing (3.3x/year, 2008-2015).
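Those annual growth factors are easier to compare as doubling times. A quick sketch converting the rates quoted above:

```python
import math

# Annual growth factors quoted above, converted to doubling times in months.
rates = {
    "AI training compute": 4.0,
    "human genome sequencing (2008-2015)": 3.3,
    "mobile phone adoption (1980-1987)": 2.0,
    "solar capacity installation (2001-2010)": 1.5,
}

for name, growth in rates.items():
    months = 12 * math.log(2) / math.log(growth)  # doubling time in months
    print(f"{name}: doubles every {months:.1f} months")
```

At 4x per year, training compute doubles every six months, twice as fast as mobile phone adoption did at its peak.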

Here, we examine whether it is technically feasible for the current rapid pace of AI training scaling—approximately 4x per year—to continue through 2030. We investigate four key factors that might constrain scaling: power availability, chip manufacturing capacity, data scarcity, and the “latency wall”, a fundamental speed limit imposed by unavoidable delays in AI training computations.

Our analysis incorporates the expansion of production capabilities, investment, and technological advancements. This includes, among other factors, examining planned growth in advanced chip packaging facilities, construction of additional power plants, and the geographic spread of data centers to leverage multiple power networks. To account for these changes, we incorporate projections from various public sources: semiconductor foundries’ planned expansions, electricity providers’ capacity growth forecasts, other relevant industry data, and our own research.

We find that training runs of 2e29 FLOP will likely be feasible by the end of this decade. In other words, by 2030 it will be very likely possible to train models that exceed GPT-4 in scale to the same degree that GPT-4 exceeds GPT-2 in scale. If pursued, we might see by the end of the decade advances in AI as drastic as the difference between the rudimentary text generation of GPT-2 in 2019 and the sophisticated problem-solving abilities of GPT-4 in 2023.
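That GPT-2 to GPT-4 comparison is a ratio claim, and it can be sanity-checked against commonly cited public compute estimates — roughly 1.5e21 FLOP for GPT-2 and 2e25 FLOP for GPT-4. Both are unofficial figures, used here only for illustration:

```python
# Rough, unofficial public estimates of training compute, in FLOP.
gpt2_flop = 1.5e21   # GPT-2 (2019), commonly cited estimate
gpt4_flop = 2e25     # GPT-4 (2023), commonly cited estimate
run_2030 = 2e29      # Epoch's projected feasible training run by 2030

print(f"GPT-2 -> GPT-4:    {gpt4_flop / gpt2_flop:,.0f}x")  # ~13,333x
print(f"GPT-4 -> 2030 run: {run_2030 / gpt4_flop:,.0f}x")   # 10,000x
```

Both jumps span roughly four orders of magnitude, which is the sense in which the report's 2030 scenario would repeat the GPT-2 to GPT-4 leap.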

From Epoch AI:

Given projected industry growth in production capacity, they think between 20 and 400 million AI chips may be available for AI training in 2030. Some of these will be serving up existing models, and AI labs will only be able to buy a fraction of the whole.

From Groq

What constraints are expected for AI scaling?

As AI systems continue to grow in complexity and scale, several constraints are expected to emerge, impacting their performance, efficiency, and overall effectiveness. Some of the key constraints expected for AI scaling include:

  1. Data Quality and Quantity: The quality and quantity of training data will become increasingly important. As AI models grow in size, they require more data to learn from, and poor-quality data can lead to biased or inaccurate results.

  2. Computational Resources: AI models require significant computational resources, including processing power, memory, and storage. As models grow in size, they will require more powerful hardware and specialized infrastructure to run efficiently.

  3. Energy Consumption: As AI systems scale, their energy consumption will increase, posing environmental and economic concerns. This will drive the development of more energy-efficient AI algorithms and hardware.

  4. Explainability and Transparency: As AI systems become more complex, it will be increasingly important to understand how they make decisions and why. This will require the development of explainable AI (XAI) techniques to ensure transparency and accountability.

  5. Interpretability and Debugging: As AI models grow in size and complexity, it will become more challenging to interpret and debug their behavior. New techniques will be needed to understand and correct errors in AI systems.

  6. Human-AI Collaboration: As AI systems scale, they will need to collaborate with humans more effectively. This will require the development of new interfaces and workflows that enable seamless human-AI collaboration.

  7. Cybersecurity: As AI systems become more connected and integrated, they will become more vulnerable to cyber threats. This will drive the development of more robust cybersecurity measures to protect AI systems and data.

  8. Regulatory Frameworks: As AI systems scale, regulatory frameworks will need to be developed to ensure their safe and ethical deployment. This will require collaboration between governments, industry, and academia to establish clear guidelines and standards.

  9. Scalability of AI Algorithms: AI algorithms will need to be designed to scale efficiently, handling large datasets and complex computations. This will require the development of new algorithms and techniques that can efficiently process and analyze large amounts of data.

  10. Edge Computing and IoT Integration: As AI systems scale, they will need to be integrated with edge computing and IoT devices to process data in real-time and make decisions at the edge. This will require the development of new architectures and protocols for edge computing and IoT integration.

  11. Distributed Computing and Cloud Infrastructure: AI systems will require distributed computing and cloud infrastructure to scale efficiently. This will drive the development of new cloud computing platforms and distributed computing architectures.

  12. AI-Driven AI: As AI systems scale, they will need to be able to learn from each other and adapt to new situations. This will require the development of AI-driven AI systems that can learn from each other and improve over time.

By understanding these constraints, AI developers and researchers can design and develop more effective, efficient, and scalable AI systems that can address complex problems and improve people's lives.

Technology sometimes puts us in front of difficult choices. For example, when we see a perfect-looking fruit on the counter, we have to ask whether the preservation technology that keeps it that way is safe for those who will eat it. :) !BEER
