You are viewing a single comment's thread from:

RE: LeoThread 2024-09-09 11:48

in LeoFinance · 2 months ago

Here is the daily technology #threadcast for 9/9/24. We aim to educate people about this crucial area while sharing information about what is taking place.

Drop all questions, comments, and articles relating to #technology and the future. The goal is to make this a technology center.


Apple just announced …

… a pink Barbie iPhone 16

😁

Wow! Awesome. Now I'm curious to know who the first man to purchase the pink Barbie iPhone 16 will be. 😁

The color goes with #bradleyarrow's underwear.

I would like to see this, but it will mostly be used by females.

The total amount of #ai compute doubles every 6 months. If this holds true, in 3 years we will have 64 times the amount of AI compute we have today.

What do you think will be possible with AI then?
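The back-of-the-envelope math in the comment above can be checked directly. This is a sketch under the comment's assumption (compute doubling every 6 months), not a measured trend:

```python
def compute_multiplier(years: float, doubling_period_years: float = 0.5) -> float:
    """Growth factor after `years` of steady doubling every `doubling_period_years`."""
    doublings = years / doubling_period_years
    return 2 ** doublings

# 3 years = 6 doubling periods -> 2^6 = 64x today's compute
print(compute_multiplier(3))  # 64.0
```

The same one-liner shows how sensitive the claim is to the doubling period: stretch it to 9 months and the 3-year figure drops to 16x.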

And now that every regular home PC will come with an NPU, much more processing power is coming. I couldn't tell if the rhythm will stay the same, though; going lower than 4nm is getting difficult.

That is true. We are going to see expansion in the inference field. Running neural nets is one thing, but adding the running of applications on top of this is going to be very interesting.

Beyond the chips, there are other layers seeing advancement, especially data structures and algorithm improvements.

Indeed, there is much more they can do in terms of design and architecture than lowering the size, at least for now. China is making some advances in its photolithography systems, but those processes usually take a good few years each generation.

A lot of people think China just copies and steals tech. They are churning out a lot of engineers, of all kinds.

They are more than just stealing tech.

(off topic) I tagged you in my new video; did you check it? I have mixed feelings about it haha

It wouldn't load when I tried it this morning on 3Speak. I got the spinning disk.

I saw some pro-Chinese YouTube channels; they say that inside China there is no intellectual property as we understand it, and companies compete among each other. But it's hard to trust the influencers; both sides invest a lot in propaganda.

That is true about the propaganda.

However, China is likely going to surpass the US since we churn out lawyers and other degreed people while lagging on those with the science degrees. It is a major problem.

A conversation with Claude 3:

How do we prepare for Artificial General Intelligence (AGI)?

Preparing for Artificial General Intelligence (AGI) is a complex and multifaceted challenge. Here are some key areas to consider:

  1. Research and development:
  • Invest in AGI research to better understand its potential capabilities and risks
  • Focus on developing safe and ethical AGI systems
  • Encourage interdisciplinary collaboration between AI researchers, ethicists, policymakers, and other relevant fields
  2. Safety and alignment:
  • Prioritize AI alignment research to ensure AGI systems' goals and values align with human values
  • Develop robust safety measures and control mechanisms
  • Explore techniques like reward modeling, inverse reinforcement learning, and corrigibility
  3. Ethics and governance:
  • Establish international agreements and governance frameworks for AGI development
  • Create ethical guidelines for AGI research and deployment
  • Develop mechanisms for ongoing oversight and regulation
  4. Societal preparation:
  • Educate the public about AGI's potential impacts
  • Address potential economic disruptions through policy measures (e.g., universal basic income, retraining programs)
  • Prepare for changes in the job market and workforce needs
  5. Infrastructure:
  • Develop secure and scalable computing infrastructure to support AGI systems
  • Ensure robust cybersecurity measures to protect AGI systems from malicious actors
  6. Scenario planning:
  • Conduct extensive scenario planning to anticipate potential outcomes and challenges
  • Develop contingency plans for various AGI development scenarios
  7. Beneficial AI:
  • Focus on developing AGI systems that can address global challenges (e.g., climate change, disease, poverty)
  • Ensure equitable access to AGI benefits across society
  8. Ongoing monitoring and adaptation:
  • Establish systems to monitor AGI development progress and potential risks
  • Be prepared to adapt strategies as our understanding of AGI evolves

What will life be like for humans after AGI is achieved?

Predicting life after AGI is achieved involves a great deal of speculation, as the impact could be profound and wide-ranging. Here's an overview of potential changes and scenarios:

  1. Economic transformation:
  • Massive automation of jobs across all sectors
  • Potential for a post-scarcity economy where basic needs are easily met
  • New economic models may emerge (e.g., resource-based economy)
  • Possible implementation of universal basic income or similar systems
  2. Work and purpose:
  • Traditional employment could become obsolete for many
  • Humans may focus more on creative, social, and personal development pursuits
  • New forms of human-AI collaboration could emerge
  • Increased emphasis on lifelong learning and adaptability
  3. Healthcare and longevity:
  • Rapid advancements in medical research and treatment
  • Potential for significant extension of human lifespan
  • Improved mental health support and personalized medicine
  • Possible integration of AI with human biology (transhumanism)
  4. Education:
  • Personalized, AI-driven learning experiences
  • Continuous education becoming the norm
  • Shift in focus from memorization to critical thinking and creativity
  5. Governance and decision-making:
  • AI systems may play a significant role in policy analysis and implementation
  • Potential for more data-driven and efficient governance
  • Questions about the role of human leadership in an AGI world
  6. Social interactions:
  • Changes in how humans interact with each other and with AI
  • Potential for AI companions and relationships
  • New forms of entertainment and social experiences
  7. Environmental impact:
  • AGI could help solve climate change and environmental challenges
  • Optimization of resource use and energy production
  • Potential for off-world expansion and resource gathering
  8. Philosophical and existential questions:
  • Redefinition of human purpose and identity
  • Ethical debates about the rights of AGI and its role in society
  • Exploration of consciousness and the nature of intelligence
  9. Risks and challenges:
  • Potential loss of privacy and autonomy
  • Risk of AGI misalignment or unintended consequences
  • Cybersecurity and the need to protect against malicious use of AGI
  10. Scientific advancement:
  • Rapid acceleration in research across all fields
  • Potential for solving currently intractable problems in physics, biology, etc.
  • New avenues of exploration in space and other frontiers

It's important to note that these predictions are speculative and the actual outcome could vary greatly depending on how AGI is developed and implemented. The transition period could be gradual or rapid, and societal adaptation will likely be an ongoing process.

What is a resource based economy?

A resource-based economy (RBE) is an economic model proposed as an alternative to traditional monetary-based systems like capitalism or socialism. Here are the key aspects of a resource-based economy:

  1. Core principle:
    The fundamental idea is that all goods and services are available without the use of money, credit, barter, or any form of debt or servitude.
  2. Resource management:
  • Resources are distributed directly to people based on need, rather than through market mechanisms.
  • Emphasis on efficient and sustainable use of Earth's resources.
  • Decisions about resource allocation are made based on scientific data and technological capabilities.
  3. Technology-driven:
  • Heavy reliance on advanced technology, automation, and artificial intelligence for production and distribution.
  • Aim to use technology to eliminate scarcity and provide abundance for all.
  4. Elimination of monetary system:
  • No use of money or any other medium of exchange.
  • Goods and services are provided freely to all members of society.
  5. Focus on sustainability:
  • Strong emphasis on environmental preservation and sustainable practices.
  • Designed to eliminate waste and overconsumption.
  6. Social structure:
  • Aim to eliminate social stratification based on wealth.
  • Focus on cooperation rather than competition.
  7. Work and motivation:
  • Traditional jobs may be largely replaced by automation.
  • People would be free to pursue education, creativity, and personal interests.
  • Motivation would come from personal growth and societal contribution rather than financial gain.
  8. Decision making:
  • Decisions about production and distribution would be based on overall societal needs and resource availability.
  • Could potentially use AI and big data analytics for complex decision-making processes.
  9. Challenges and criticisms:
  • Questions about human nature and whether people would remain motivated without financial incentives.
  • Concerns about individual freedom and choice in resource allocation.
  • Practical challenges in transitioning from current economic systems.

The concept of a resource-based economy was popularized by Jacque Fresco through The Venus Project, though similar ideas have been proposed by others. It remains a theoretical model and has not been implemented on a large scale.

This economic system is often discussed in the context of advanced technological capabilities, including AGI, which could potentially manage the complex logistics and decision-making required for such a system. However, it's important to note that it's a highly speculative concept with many practical and philosophical questions yet to be addressed.

Will AGI be positive or negative for humanity?

The impact of AGI on humanity is likely to be complex and multifaceted, with both positive and negative potential outcomes. It's difficult to predict with certainty, but here's a balanced overview:

Potential positive impacts:

  1. Solving global challenges: AGI could help address complex issues like climate change, disease, and poverty.

  2. Scientific breakthroughs: Accelerated research and discovery in fields like medicine, physics, and space exploration.

  3. Economic productivity: Increased efficiency and productivity in various sectors.

  4. Personalized education and healthcare: Tailored solutions for individual needs.

  5. Enhanced decision-making: Data-driven insights for better governance and policy decisions.

Potential negative impacts:

  1. Job displacement: Widespread automation could lead to significant unemployment.

  2. Economic inequality: Benefits of AGI might not be distributed evenly.

  3. Loss of privacy: Advanced AI systems could infringe on personal privacy.

  4. Security risks: Potential for misuse in cyber attacks or autonomous weapons.

  5. Existential risk: If not properly aligned with human values, AGI could pose a threat to humanity's existence.

Neutral or mixed impacts:

  1. Shift in human purpose: As AGI takes over many tasks, humans may need to redefine their roles and purpose.

  2. Changes in social structures: AGI could dramatically alter how societies function and interact.

  3. Philosophical questions: AGI may challenge our understanding of consciousness and intelligence.

The ultimate impact of AGI will likely depend on how it's developed, implemented, and governed. Careful planning, ethical considerations, and international cooperation will be crucial in steering AGI development towards beneficial outcomes while mitigating risks.

Given the potentially transformative nature of AGI, it's important to approach its development with caution, foresight, and a commitment to aligning it with human values and well-being. The goal should be to maximize the benefits while actively working to prevent or minimize potential negative consequences.

Will AGI bring about an economic singularity?

The concept of an "economic singularity" in relation to AGI is an intriguing and complex topic. Here's an analysis of how AGI might lead to profound economic changes that could be considered a form of singularity:

  1. Definition of economic singularity:
    An economic singularity refers to a hypothetical point where technological growth, particularly AGI, leads to such rapid and fundamental changes in the economy that it becomes difficult or impossible to predict what comes after.
  2. Potential drivers of an economic singularity:
  • Exponential productivity growth: AGI could dramatically accelerate productivity across all sectors.
  • Automation at unprecedented scale: AGI could automate not just manual tasks, but also complex cognitive work.
  • Resource optimization: AGI might find novel ways to use resources more efficiently.
  • Innovation acceleration: AGI could speed up the pace of scientific and technological breakthroughs.
  3. Possible economic impacts:
  • Rapid wealth creation: AGI could lead to explosive economic growth.
  • Job market disruption: Many traditional jobs could become obsolete quickly.
  • Shift in value creation: Economic value might increasingly come from AGI-driven innovations rather than human labor.
  • Changes in ownership and capital: Questions may arise about who owns and benefits from AGI-driven production.
  4. Challenges in predicting post-singularity economics:
  • Non-linear growth: Economic models based on historical data may break down.
  • New economic paradigms: Traditional concepts of scarcity, labor, and value might need to be redefined.
  • Rapid change: The speed of change could outpace human and institutional adaptation.
  5. Skepticism and counterarguments:
  • Gradual transition: Some argue that economic changes will be more gradual and manageable.
  • Human adaptation: Humans have historically adapted to major technological shifts.
  • Regulatory controls: Governments might implement policies to slow or control the economic impact of AGI.
  6. Preparation and policy considerations:
  • Education and reskilling: Preparing the workforce for an AGI-driven economy.
  • Economic safety nets: Considering models like universal basic income.
  • Ethical AI development: Ensuring AGI is developed with consideration for its economic impact.
  • Global cooperation: Managing the potentially uneven global effects of an AGI-driven economic shift.

While the concept of an economic singularity is thought-provoking, it's important to note that it remains speculative. The actual economic impact of AGI will depend on numerous factors, including the specific capabilities of AGI systems, the pace of development and deployment, regulatory environments, and societal responses.

The possibility of an economic singularity underscores the importance of proactive planning and ethical considerations in AGI development. It highlights the need for interdisciplinary collaboration between AI researchers, economists, policymakers, and ethicists to navigate the potential economic transformations that AGI might bring about.

YouTube takes on TikTok Shop with expanded Shopify partnership

YouTube is expanding its partnership with Shopify to onboard more brands for its YouTube Shopping affiliate program.

As TikTok Shop gains traction, YouTube is expanding its partnership with Shopify to onboard more brands for its YouTube Shopping affiliate program, the company announced on Tuesday. The expanded partnership means creators will get access to thousands of new brands to tag in their shopping videos, marking a significant increase from the few hundred brands they currently have access to.

#youtube #shopify #tiktok #shopping #technology

As part of the expansion, all eligible Shopify Plus and Advanced merchants in the U.S. can now sign up for the Shopping affiliate program.

The YouTube Shopping affiliate program gives creators a way to earn money through their videos while helping their audience find new products. For brands, it’s another way to reach customers and increase sales.

The expanded partnership will allow YouTube to be better poised to take on TikTok, which is reportedly aiming to grow its TikTok Shop U.S. business tenfold to $17.5 billion this year. Although TikTok is newer to the U.S. e-commerce space when compared to YouTube, the company has been onboarding merchants at a relatively fast pace, as TikTok reported in April that it had 500,000 merchants in the U.S. on TikTok Shop by the end of 2023.

YouTube also announced that it’s launching a new Chrome extension that will allow creators in the U.S. who are part of the affiliate program to save products while browsing a brand or retailer’s site in order to quickly find them when they’re ready to tag them in a video.

The expansion of the affiliate program comes as YouTube has ramped up its e-commerce efforts over the past few years as a way to bring in more revenue amid a slowdown in digital advertising. Its efforts seem to be working, as YouTube previously announced that people watched more than 30 billion hours of shopping-related videos on YouTube in 2023, and that the platform saw a 25% increase in watch time for videos that help people shop.

Apple Intelligence delays could impede iPhone 16 ‘supercycle’

When Apple unveiled its AI plans at WWDC in June, analysts suggested the feature could put the iPhone 16 on track for another "supercycle."

When Apple unveiled its AI plans at WWDC in June, analysts suggested the feature could put the iPhone 16 on track for another “supercycle.” Like the addition of 5G before it, industry watchers believed that Apple Intelligence’s arrival might convince holdouts to bite the bullet and upgrade their device.

We’ll have a much better picture when the company reveals the iPhone 16 at tomorrow’s “It’s Glowtime” event in Cupertino. In the meantime, however, it’s believed that Apple Intelligence’s slow rollout could significantly hamper supercycle potential.

#technology #apple #iphone #newsonleo

In June, Apple showed the world a take on AI firmly in line with its mission statement of prioritizing user experience. Trained on small models — instead of the massive black boxes behind services like Google Gemini and OpenAI’s ChatGPT — Apple’s play is centered around improving existing experiences through generative AI.

As Bloomberg notes today, however, delays could impact the iPhone 16’s bottom line. WWDC offered a lot of big promises, but we’re unlikely to see many delivered until 2025. Regional limitations present key roadblocks as well, including in the EU and China — the latter of which can make or break Apple financials in any given quarter. ChatGPT integration, meanwhile, is also unlikely for launch.

In the lead-up to WWDC, the industry wondered whether Apple would be able to become a major player in the generative AI competition. But while the developer conference wowed observers with impressive demos, there’s still a lot of catching up to do.

At the time of writing, the iPhone 15 Pro/Pro Max are the only devices approved to run Apple Intelligence. Tomorrow’s event is set to change that, especially as further rumors point to increased uniformity across the iPhone 16 line. The base models are expected to get A18 chips, with the Pros, fittingly, getting an A18 Pro.

While sales are incredibly important to Apple and its shareholders, Apple Intelligence is a massive, multi-faceted undertaking, and rushing it out too early could have a longer term adverse impact on the company’s bottom line than a staggered rollout.

Bending Spoons plans to lay off 75% of WeTransfer staff after acquisition

Italy-based app company Bending Spoons, which owns Evernote and Meetup, is planning to lay off 75% of the staff of file transfer service WeTransfer,

Italy-based app company Bending Spoons, which owns Evernote and Meetup, is planning to lay off 75% of the staff of file transfer service WeTransfer, TechCrunch has learned. Bending Spoons acquired the Dutch company in July for an undisclosed amount.

#technology #newsonleo #bendingspoon #wetransfer

The company confirmed the plans for the WeTransfer layoffs to TechCrunch. The staff being let go will be informed once Bending Spoons has worked through the local regulations on layoffs in each country. Dutch media reported that WeTransfer has over 350 employees.

Bending Spoons CEO Luca Ferrari said that, typically, the company studies the organization it is acquiring and determines the best vision for the business going forward, with an aim to operate it forever.

“Once the vision is clear, we try to close the gap between the status quo and the vision as quickly and as fully as we can. Typically, doing so involves making major changes to many areas, including the organization. While we don’t enjoy making painful or unpopular decisions, we’re prepared to do so when we believe it’s the right thing to do to help the business thrive,” he said.

Ferrari also noted that in WeTransfer’s case, Bending Spoons wanted to have a small and focused team.

“In this particular case, the vision we developed is of a smaller, more sharply focused WeTransfer organization, which we believe will be better-positioned to serve WeTransfer’s success with a long-term view,” the CEO said.

Since the acquisition, WeTransfer has rolled out a new feature that lets users extend the expiry date of a link in the app. However, neither Bending Spoons nor WeTransfer has disclosed plans for a long-term product roadmap.

The Guardian: Apple to unveil iPhone 16 and ‘Apple Intelligence’ AI features


https://www.theguardian.com/technology/article/2024/sep/09/apple-ai-iphone-16

The Guardian: ‘If journalism is going up in smoke, I might as well get high off the fumes’: confessions of a chatbot helper


https://www.theguardian.com/technology/article/2024/sep/07/if-journalism-is-going-up-in-smoke-i-might-as-well-get-high-off-the-fumes-confessions-of-a-chatbot-helper

The Guardian: Goodbye Tinder, hello Strava: have ‘hobby’ apps become the new social networks?


https://www.theguardian.com/technology/article/2024/sep/08/goodbye-tinder-hello-strava-have-hobby-apps-become-the-new-social-networks

The Guardian: Telegram chief’s arrest sends a clear message: tech titans are not above the law


https://www.theguardian.com/commentisfree/article/2024/sep/07/telegram-chiefs-arrest-sends-a-clear-message-tech-titans-are-not-above-the-law

Thousands of Avis car rental customers had personal data stolen in cyberattack

The car rental giant says personal information, credit card information, and driver's license numbers were stolen in the August cyberattack.

Car rental giant Avis is notifying hundreds of thousands of people that their personal information and driver’s license numbers were stolen in an August cyberattack.

The New Jersey-headquartered company said in a data breach notice filed with several U.S. attorneys general over the past week that it discovered intruders in one of its business applications on August 5 and took efforts to end the unauthorized access, which the company said began two days earlier.

#newsonleo #avis #hack #technology

Avis did not disclose the nature of the cyberattack and details of the incident remain scarce. An Avis spokesperson did not respond to an email requesting comment about the cyberattack.

In a data breach notice filed late last week with Iowa’s attorney general, the car rental company said that the stolen information includes customer names, mailing addresses, email addresses, phone numbers, their date of birth, credit card numbers and expiration dates, and driver’s license numbers. It’s not yet known why Avis stored this sensitive customer information in a way that allowed it to be compromised.

On Monday, a filing with Maine’s attorney general revealed that Avis’ data breach affects a total of 299,006 individuals to date. A separate filing with Texas’ attorney general reported that Texas had the largest number of affected residents of any state, at 34,592 individuals.
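For a sense of proportion, the two figures reported in the filings above imply Texas accounts for roughly one in nine affected individuals:

```python
# Figures from the Maine and Texas attorney general filings cited above.
total_affected = 299_006
texas_affected = 34_592

texas_share_pct = round(100 * texas_affected / total_affected, 1)
print(texas_share_pct)  # 11.6
```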

Further data breach notices are expected to be filed in the coming weeks with the remaining attorneys general. It’s not yet known if the number of individuals affected by the Avis data breach will rise.

Avis, which owns the Budget car hire and Zipcar car-sharing brands, has more than 10,000 rental locations in 180 countries, according to the company’s most recent full-year financial earnings reported in February. Avis made $12 billion in revenue during 2023, and the company’s chief executive Joe​ Ferraro reported $10.2 million in total compensation that year.

BBC: A revolutionary gin from Sir Isaac Newton's iconic apple tree

After more than 300 years, Newton’s law of gravitation is still palpable at a revolutionary Cambridge distillery.



https://www.bbc.com/travel/article/20240906-a-revolutionary-gin-from-sir-isaac-newtons-iconic-apple-tree

BBC: Which rural area will take the UK’s nuclear waste?


https://www.bbc.com/news/articles/czx6e2x0kdyo

BBC: Google's lucrative ad tech business goes on trial


https://www.bbc.com/news/articles/ckg2dpkpmv1o

Electro-hydraulic muscles help these robot legs stand straight on uneven terrain

Along with the ability to automatically adjust to the surface they’re traversing, the legs move faster and jump higher than their more standardized electric counterparts.

Researchers at ETH Zürich are very good at keeping robots standing upright. Back in 2022, the school’s robotics team taught the quadrupedal ANYmal robot how to hike up mountains without falling over. New research from the school, published in collaboration with the Stuttgart-based Max Planck Institute for Intelligent Systems, takes a unique approach to the problem of traversing uneven terrain.

#newsonleo #robots #technology

The artificial muscles showcased by the two institutions are powered by a hybrid electro-hydraulic system. Along with the ability to adjust automatically to the surface they’re traversing, the legs move faster and jump higher than their more standardized electric counterparts, all without requiring additional sensors or controls.

The legs’ actuators are surprisingly simple. The teams describe them as “oil-filled plastic bags, similar to those used to make ice cubes.” Those bags are then covered in electrodes. Honestly, the whole thing sounds a bit like a school science project.

“[A]s soon as we apply a voltage to the electrodes, they are attracted to each other due to static electricity,” says grad student Thomas Buchner. “Similarly, when I rub a balloon against my head, my hair sticks to the balloon due to the same static electricity.”

The bags then expand or contract, based on the voltage being applied. Unlike standard electric actuators, the system doesn’t generate a lot of heat.

So the actuators help the system traverse uneven terrain and help it jump high. In terms of real-world use, however, the system still has a long way to go.

“Compared to walking robots with electric motors, our system is still limited. The leg is currently attached to a rod, jumps in circles and can’t yet move freely,” says Christoph Keplinger, a professor at Max Planck Institute. “If we combine the robotic leg in a quadruped robot or a humanoid robot with two legs, maybe one day, when it is battery-powered, we can deploy it as a rescue robot.”

European VC Atomico closes $1.24B across two funds for early and growth-stage startups

Atomico has announced new funds totalling $1.24 billion, as the VC giant targets early- and growth-stage startups across Europe.

As European startups continue to look for signs of sustained market confidence beyond the hype around AI companies, Atomico — one of the region’s more iconic, largest venture capital firms — has raised more money to make investments that might indicate how the market is really moving. The VC has closed new funds totalling $1.24 billion to back early- and growth-stage startups across the region.

#startup #technology #newsonleo #atomico

London-based Atomico is describing this as its “largest ever fundraise,” although technically it is across two pots of money. “Atomico Venture VI” weighs in at $485 million for mostly Series A-stage companies (with some reserved for seed), and a separate $754 million fund — dubbed “Atomico Growth VI” — is for Series B through pre-IPO.
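A quick sanity check on the figures in the article: the two funds do sum to the "$1.24 billion" headline number after rounding (amounts in millions of USD):

```python
# Fund sizes from the article, in millions of USD.
venture_vi = 485   # "Atomico Venture VI" (mostly Series A, some seed)
growth_vi = 754    # "Atomico Growth VI" (Series B through pre-IPO)

total_millions = venture_vi + growth_vi
print(total_millions)                  # 1239
print(round(total_millions / 1000, 2)) # 1.24 (billions, as reported)
```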

Raising and allocating money from separate funds is typical of many venture capital firms today, but that Atomico closed two separate funds, led by separate teams, is notable. The firm has historically leaned toward earlier funding rounds while dipping into later stages where it made sense. Now it’s setting itself up to focus just as much on the later stages of a startup’s journey with a dedicated fund.

This move could also point to a trepidation among some in the investor fraternity who are hesitant to put money into fledgling pre-profit companies. By setting things up this way, it becomes easier for Atomico to bring contributions from more risk-averse limited partners (LPs) into the fray by enabling them to channel their cash into tried and tested businesses, rather than backing a single fund that may span anything from seed to Series F.

The news also comes amidst a downturn in the global venture capital sphere, a trend to which Europe has not been impervious.

Among the things on which Atomico has built a reputation is its annual research report on the state of the European technology ecosystem, which focuses on how the venture capital end of the market is faring. Its most recent report made for grim reading, noting that amid an ongoing downturn, European startup funding halved in 2023, driven by factors such as geopolitical events, inflation, and interest rates. It also determined that the market, and investment data, had been skewed in 2021 and 2022 because of the global pandemic.

Elon Musk says Tesla has ‘no need’ to license xAI models

Elon Musk has denied a report that one of his companies, Tesla, has discussed sharing revenue with another of his companies, xAI.

Elon Musk has denied a report that one of his companies, Tesla, has discussed sharing revenue with another of his companies, xAI, so that it can use the startup’s AI models.

The Wall Street Journal wrote yesterday that under a proposed agreement described to investors, Tesla would use xAI models in its driver-assistance software (known as Full Self-Driving, or FSD). The AI startup would also help develop features such as a voice assistant in Tesla vehicles and software for Tesla’s humanoid robot, Optimus.

Writing on his social media platform X (formerly Twitter), Musk said he hadn’t read the WSJ story, but he described a post summarizing the report as “not accurate.”

“Tesla has learned a lot from discussions with engineers at xAI that have helped accelerate achieving unsupervised FSD, but there is no need to license anything from xAI,” he wrote. “The xAI models are gigantic, containing, in compressed form, most of human knowledge, and couldn’t possibly run on the Tesla vehicle inference computer, nor would we want them to.”

Musk founded xAI as a competitor to OpenAI (which he co-founded but eventually left). TechCrunch reported earlier this year that as part of the pitch for xAI’s $6 billion funding round, the startup outlined a vision where its models would be trained on data from Musk’s various companies (Tesla, SpaceX, The Boring Company, Neuralink, and X), and its models could then improve technology across those companies.

#xai #technology #tesla #elonmusk

La French Tech gears up to go in a new direction

After weeks in political limbo, France now has a new prime minister, former EU's Brexit negotiator Michel Barnier.

After weeks in political limbo, France now has a new prime minister, former EU’s Brexit negotiator Michel Barnier. But parliament remains bitterly divided, generating uncertainty for many economic sectors — including the country’s dynamic startup ecosystem, which had so far benefited from public support.

#france #technology #politics #newsonleo

La French Tech isn’t just a term referring to France’s 25,000 startups; it is also an initiative supported by a public administration, the French Tech Mission, whose director, Clara Chappaz, is departing this month as her three-year contract ends, she told Les Échos. The timing is coincidental, but also noteworthy. Her replacement, she told TechCrunch in late August, should hopefully be someone who is “not afraid of change” and sees it as an opportunity.

The role is undoubtedly unique. It’s attached to the Ministry of the Economy and Finance, but with a lean team working out of Station F and a big mission: to support the structuring and growth of the French startup ecosystem, in France and abroad. It requires someone who is equally able to talk to public officials, politicians, big companies and journalists.

It is now too late to apply, but seeing how it has been described as a “dream job,” there should be enough candidates in the ranks; Chappaz, who won’t be part of the jury, said when we talked in August that she was taking several calls a day from prospective candidates. She likely told them pretty much the same as she told TechCrunch and her LinkedIn followers: that France is “incredibly lucky” to have its French Tech Mission and that her experience at its helm was “exceptional.”

Very interesting video about AI and the development of it.

I am going to do an article about this.

#ai #technology

A great video about the future of #ai.

The Rise of Transformers: A Conversation with AI Pioneer Andrej Karpathy

AI research has undergone a remarkable transformation in recent years, and at the forefront of this revolution is Andrej Karpathy, a founding member of OpenAI and former leader of Tesla's Autopilot team. In a captivating interview on the No Priors podcast, Karpathy delved into the advancements and challenges of modern AI development, with a particular focus on the groundbreaking Transformer architecture.

The Transformer: A Magical Breakthrough

Karpathy highlighted the Transformer as a pivotal innovation in the field of AI. Developed by Google in its 2017 research paper "Attention Is All You Need," the Transformer represents a significant departure from previous neural network architectures like LSTMs. According to Karpathy, the Transformer is a "beautiful blob of tissue" that can be applied to a wide range of tasks, provided it has access to the right data.

One of the key advantages of the Transformer, Karpathy explained, is its ability to scale gracefully with increased computational resources. As the amount of compute power dedicated to the Transformer model is increased, the quality of its outputs improves dramatically, often to the point of producing lifelike, high-fidelity results. This scaling property, known as the "scaling laws," is a hallmark of the Transformer and a testament to its versatility.

Karpathy attributed the Transformer's success to a combination of several innovations, including residual connections, layer normalization, the attention mechanism, and the absence of saturating nonlinearities. These elements, when combined, have created a "magical" piece of technology that can be trained to perform a wide variety of tasks.
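To make the attention mechanism named above concrete, here is a minimal pure-Python sketch of scaled dot-product attention, the core operation of the Transformer. The function names and the tiny 2x2 shapes are illustrative choices for this example, not anything from the interview:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    with Q, K, V given as lists of row vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query against every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Each output row is a weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because every output row is just a softmax-weighted mix of the value rows, the whole operation reduces to matrix multiplies, which is exactly why it scales so gracefully with more compute.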

Shifting Focus: From Architecture to Data and Loss Functions

While the Transformer has been a transformative breakthrough, Karpathy noted that the focus in the AI community has shifted away from the architecture itself. He observed that companies and researchers are now more concerned with the quality and availability of data, as well as the design of the loss functions used to train these models.

Karpathy highlighted the potential of synthetic data as a solution to the perceived "data wall" that AI systems may face. He discussed the importance of maintaining diversity and entropy in synthetic data, to avoid the problem of "silent collapse" where models become overly specialized and lose the richness of their outputs.

One example of synthetic data innovation is the Persona Hub, a dataset of 1 billion unique personas that can be used to inject diversity and context into training data. By associating tasks and prompts with these diverse personas, Karpathy believes AI systems can be trained to explore a richer space of possibilities, ultimately leading to more capable and robust models.
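As a hypothetical illustration of that idea, persona-conditioned prompt generation can be as simple as pairing task templates with sampled personas. The personas and templates below are invented for this sketch and are not drawn from the actual Persona Hub dataset:

```python
import random

# Invented example data: a handful of personas and task templates.
PERSONAS = ["a marine biologist", "a medieval historian", "a circuit designer"]
TEMPLATES = [
    "Explain {topic} to {persona}.",
    "Write a quiz about {topic} for {persona}.",
]

def synth_prompts(topic, n, seed=0):
    """Generate n synthetic training prompts for a topic, injecting
    persona diversity so the outputs don't collapse to one style."""
    rng = random.Random(seed)
    return [rng.choice(TEMPLATES).format(topic=topic, persona=rng.choice(PERSONAS))
            for _ in range(n)]
```

Scaling the persona list from three entries to a billion is what turns this trivial trick into a meaningful source of entropy for training data.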

Toward Human-AI Symbiosis

Karpathy also contemplated the relationship between AI systems and the human brain, noting that in some cognitive aspects, Transformers may even surpass the capabilities of the human brain. He pointed out that Transformers excel at tasks like memorizing and completing sequences, which are areas where the human brain faces significant limitations.

This raises the intriguing possibility of human-AI augmentation, where powerful AI models could serve as "exocortices" that extend and enhance human cognitive abilities. Karpathy acknowledged that while the exact form of this merger remains uncertain, the potential for AI to act as a symbiotic partner to humans is an area of active exploration and discussion within the AI community.

As the field of AI continues to evolve, the insights and predictions shared by Andrej Karpathy offer a compelling glimpse into the future of this dynamic and rapidly advancing technology. The rise of the Transformer and the shift in focus toward data and loss functions suggest that the path to artificial general intelligence (AGI) may lie in a delicate balance between architectural innovation and the careful cultivation of training data and objectives.

The Democratization of AI: Empowering Individuals Through Education

Karpathy's passion for education and his desire to empower individuals emerged as a central theme in the interview. He expressed a strong interest in using AI to democratize access to high-quality education, rather than simply automating and displacing human workers.

Karpathy envisioned a future where AI-powered tutors could personalize the learning experience for each student, catering to their unique backgrounds and learning styles. By harnessing the power of language models and translation capabilities, Karpathy believes these AI tutors could provide truly global and accessible education, unlocking the full potential of every individual.

The Road Ahead: Navigating the Challenges and Opportunities

As the AI landscape continues to evolve, Karpathy acknowledged the complexities and potential pitfalls that must be navigated. The balance between open-source and closed-platform models, the risk of "renting" one's cognitive abilities, and the need to maintain diversity and entropy in synthetic data are just a few of the critical considerations.

Yet, Karpathy remains optimistic about the future, believing that AI can be harnessed to empower and enhance human capabilities, rather than replace or subjugate them. His vision of a future where AI serves as a symbiotic partner, augmenting and accelerating human potential, offers a compelling and hopeful path forward in this rapidly transforming technological landscape.

The further the auto industry develops, the fewer opportunities there are to get at the engine oil drain plug and filter yourself. Go to a car service! - they say :)
!BEER


Hey @taskmaster4450le, here is a little bit of BEER from @barski for you. Enjoy it!

Did you know that you can use BEER at the <a href='https://dcity.io/city'>dCity</a> game to buy cards to rule the world.

2 MINUTES AGO: Gemini Just Got a HUGE Upgrade - ‘Gems’ Assistants & Imagen 3 Are Here!

#ai #google #gems

Google Unveils Revolutionary AI Upgrades: Gemini Gems and Imagen 3

In a groundbreaking move, Google has announced major upgrades to its Gemini AI platform, introducing "Gems," personalized AI assistants designed for specific needs, and Imagen 3, an advanced image generation model that blurs the line between reality and imagination. These updates are now available in over 150 countries, democratizing AI technology and making powerful tools accessible to everyone.

Gems: Personalized AI Assistants

Gems are hyper-specialized AI assistants designed to perform specific tasks exceptionally well. They can be tailored to individual needs, making them useful for various applications, such as:
Education: AI tutors for personalized learning
Healthcare: diagnostic assistants for faster and more accurate diagnoses
Business: market trend analysis and customer behavior insights

Imagen 3: Advanced Image Generation

Imagen 3 is a cutting-edge image generation model that produces photorealistic visuals from simple text prompts. Its capabilities have far-reaching implications for industries like advertising, virtual reality, and digital art.

Ethical Considerations

Google has incorporated SynthID watermarking technology into Imagen 3 to ensure AI-generated content is distinguishable from real photographs, addressing concerns around deepfakes and misinformation.

Competitive Landscape

Google's upgrades position the company as a leader in AI innovation, differentiating itself through specialization and ethical considerations. The AI market is increasingly crowded, with competitors like OpenAI, Microsoft, Meta, and Hugging Face vying for the top spot.

Implications and Future Directions

The Gemini upgrades mark the beginning of a transformative period in AI development. As AI becomes more ingrained in daily life, concerns around data privacy and job displacement must be addressed. Google's commitment to robust safety measures and ethical considerations will be crucial in navigating these challenges.

The future of AI holds exciting opportunities and challenging questions. As the competition heats up, the industry will be watching to see how users respond to these new tools. One thing is certain - the AI revolution is just getting started.

Jensen Huang, the CEO of Nvidia, wants to make everyone a programmer. That is how he feels we can bridge the digital divide.

Anyone willing to bet against him?

Software is increasingly the most important part of a car, something that traditional automakers have yet to fully grasp, and they still think of it in terms of entertainment and GPS. As a result, they continue using very old chips that are incapable of supporting the new features that the latest vehicles require. Most car companies are still run by petrol-heads who only think in terms of compression, cylinders and fuel consumption, factors in an already obsolete internal combustion technology, and so tend to outsource software to a third party, seeing it as a nuisance.

#technology #automotive

The encryption guarding digital communications could someday be cracked by quantum computers. Dubbed 'Q-day,' that moment could upend military and economic security worldwide. Great powers are sprinting to get there first.

In February, a Canadian cybersecurity firm delivered an ominous forecast to the U.S. Department of Defense. America’s secrets – actually, everybody’s secrets – are now at risk of exposure, warned the team from Quantum Defen5e (QD5).

#technology #quantum #computing #china #us

QD5’s executive vice president, Tilo Kunz, told officials from the Defense Information Systems Agency that possibly as soon as 2025, the world would arrive at what has been dubbed “Q-day,” the day when quantum computers make current encryption methods useless. Machines vastly more powerful than today’s fastest supercomputers would be capable of cracking the codes that protect virtually all modern communication, he told the agency, which is tasked with safeguarding the U.S. military’s communications.

In the meantime, Kunz told the panel, a global effort to plunder data is underway so that intercepted messages can be decoded after Q-day in what he described as “harvest now, decrypt later” attacks, according to a recording of the session the agency later made public.

Militaries would see their long-term plans and intelligence gathering exposed to enemies. Businesses could have their intellectual property swiped. People’s health records would be laid bare.

“We are not the only ones who are harvesting, we are not the only ones hoping to decrypt that in the future,” Kunz said, without naming names. “Everything that gets sent over public networks is at risk.”
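To see why "harvest now, decrypt later" is worth the attackers' trouble, note that RSA security rests entirely on the difficulty of factoring the public modulus. The sketch below recovers a private key from a textbook-sized public key by trial division; Shor's algorithm on a sufficiently large quantum computer would do the equivalent for real 2048-bit keys:

```python
def factor(n):
    """Trial division: feasible only for toy moduli. Shor's algorithm on a
    large quantum computer would achieve the same for 2048-bit keys."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

def crack_rsa(n, e):
    """Recover the RSA private exponent d from the public key (n, e) alone."""
    p, q = factor(n)
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)  # modular inverse (Python 3.8+)
```

With the classic textbook key n = 3233 (61 x 53) and e = 17, this recovers the private exponent d = 2753 instantly. The whole point of post-quantum cryptography is to ensure that no comparable shortcut exists even for a quantum adversary.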

What is an NPU?

An NPU, or Neural Processing Unit, is a dedicated processor or processing unit on a larger SoC designed specifically for accelerating neural network operations and AI tasks. Unlike general-purpose CPUs and GPUs, NPUs are optimized for data-driven parallel computing, making them highly efficient at processing massive multimedia data like videos and images and processing data for neural networks. They are particularly adept at handling AI-related tasks, such as speech recognition, background blurring in video calls, and photo or video editing processes like object detection.

#npu #computing #ai #technology

NPUs are integrated circuits but they differ from single-function ASICs (Application-Specific Integrated Circuits). While ASICs are designed for a singular purpose (such as mining bitcoin), NPUs offer more complexity and flexibility, catering to the diverse demands of network computing. They achieve this through specialized programming in software or hardware, tailored to the unique requirements of neural network computations.

For most consumer products, the NPU will actually be integrated into the main CPU, as in the Intel Core and Core Ultra series or the new AMD Ryzen 8040-series laptop processors. At larger data centers or more specialized industrial operations, though, the NPU might be an entirely discrete processor on the motherboard, separate from any other processing units.

NPU vs. GPU

While many AI and machine learning workloads are run on GPUs, there is an important distinction between the GPU and NPU.

While GPUs are known for their parallel computing capabilities, not all GPUs handle workloads beyond graphics processing well; they require special integrated circuits to effectively process machine learning workloads. The most popular Nvidia GPUs have these circuits in the form of Tensor cores, but AMD and Intel have also integrated such circuits into their GPUs, mainly for handling resolution-upscaling operations — a very common AI workload.

NPUs, meanwhile, simply take those circuits out of a GPU (which handles a bunch of other operations) and make them a dedicated unit of their own. This allows them to process AI-related tasks more efficiently at lower power, making them ideal for laptops, but it also limits their potential for heavy-duty workloads, which will still likely require a GPU.
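A rough sketch of the kind of operation NPU hardware accelerates: quantizing activations to 8-bit integers and running the multiply-accumulate loop entirely in integer arithmetic. This is a simplified illustration of the general technique, not any vendor's actual API:

```python
def quantize(xs, scale=127.0):
    """Map floats in [-1, 1] to int8 values, as an NPU front end typically would."""
    return [max(-127, min(127, round(x * scale))) for x in xs]

def int8_dot(a, b, scale=127.0):
    """Dot product done entirely with integer multiply-accumulate steps --
    the core loop NPU hardware runs massively in parallel -- then dequantized."""
    a_q, b_q = quantize(a, scale), quantize(b, scale)
    acc = 0  # wide integer accumulator, no floating point in the hot loop
    for ai, bi in zip(a_q, b_q):
        acc += ai * bi
    return acc / (scale * scale)
```

Integer MAC units are far smaller and cheaper to run than floating-point ones, which is where the NPU's power advantage comes from; the price is the small rounding error introduced by quantization.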

The role of NPUs in different systems

NPUs are designed to complement the functions of CPUs and GPUs. While CPUs handle a broad range of tasks and GPUs excel in rendering detailed graphics, NPUs specialize in executing AI-driven tasks swiftly. This specialization ensures that no single processor gets overwhelmed, maintaining smooth operation across the system.

For instance, in video calls, an NPU can efficiently manage the task of blurring the background, freeing up the GPU to focus on more intensive tasks. Similarly, in photo or video editing, NPUs can handle object detection and other AI-related processes, enhancing the overall efficiency of the workflow.

The Complete History of BlackBerry: From Pagers to Phones to Software

Childhood friends Mike Lazaridis and Douglas Fregin started a computer consulting company called Research In Motion (RIM) above a bagel shop in 1984. Little did they know this modest startup would evolve into the iconic BlackBerry brand behind some of the earliest and most popular smartphones.

Over nearly 40 years, BlackBerry pioneered push email on mobile devices, popularized full QWERTY keyboards, and rose to dominate workplace communication. But failure to adapt to full touchscreens and app ecosystems led to its swift decline. Ultimately, BlackBerry stopped making phones altogether, instead finding new life developing automotive software and cybersecurity solutions.

#blackberry #technology #history #smartphone

This winding journey reflects the rapidly changing tides of technology and consumer tastes. Let's explore the complete history behind one of the tech industry's most spectacular rises, falls, and rebounds.

The Early Days: Wireless Pioneers

Mike Lazaridis enrolled at the University of Waterloo in Ontario, Canada to study electrical engineering in the early 1980s. There he met his future business partner, Doug Fregin. After graduating, Lazaridis saw opportunities to commercialize breakthroughs from his university research.

In 1984, Lazaridis and Fregin rented space above a bagel shop to found Research In Motion (RIM). Their first product was a system for integrating wireless data processors and pagers. This allowed service technicians to send data messages directly from pagers rather than locating payphones.

By the late 1980s, RIM was working on technology for mobile point-of-sale credit card processing. This caught the attention of Jim Balsillie, an industrious Harvard MBA graduate. Balsillie invested $125,000 to join RIM and spearheaded business development activities to drive rapid growth.

Under Balsillie's leadership, RIM began expanding internationally. By the mid 1990s, the company had a staff of over 30 developing electronics hardware and radio frequency wireless communications technology.

The BlackBerry Name Emerges

In 1996, RIM introduced an early predecessor to the BlackBerry called the RIM 900 Inter@ctive Pager. The 900 stood out with its QWERTY keyboard for efficient two-way messaging between customers and dispatchers.

As the story goes, when employees saw the keyboard layout, it reminded them of the seeds of blackberries. Hence the BlackBerry name was coined.

The RIM 900 Inter@ctive Pager was a sophisticated device for its time. It utilized an Intel 386SXLV processor, 2MB of system memory, and offered parcel tracking apps by integrating with logistics networks. The full keyboard enabled workflow improvements in the field – characteristic of RIM's focus on enterprise utility.

In 1997, RIM debuted on the Toronto Stock Exchange, raising over $115 million. At the time, products consisted mostly of hardware like wireless modems and radio transmitter base stations.

But in 1999, RIM took a pivotal step by launching BlackBerry wireless email service in North America. This delivered secure, real-time email access tailored for businesses and eventually consumers.

To access the service, RIM introduced its first BlackBerry handhelds, the 850 and 957. Though lacking phone capabilities, these let users wirelessly surf web pages and manage contacts beyond just email.

The 850 and 957 leveraged advanced radio hardware, OS-level Java applications and powerful batteries to enable seamless messaging and navigation. Push email in particular drove rapid user growth as the service expanded globally.

Leaping to Smartphone Dominance

In the early 2000s, mobile phones that could access data services started gaining steam. RIM entered this burgeoning smartphone market in 2002 with the 5810.

The 5810 stood out for bundling email, web browsing, enterprise apps and phone capabilities in a compact, QWERTY-sporting device. It also pioneered the side scrolling wheel for quick navigation in the absence of touch screens.

Over the next several years, BlackBerry cemented itself as the smartphone brand of choice by appealing to business users. Its hallmarks – seamless email, robust physical keyboards, BBM messaging – increased productivity for professionals on the go.

New models dramatically boosted resolutions, battery life, Bluetooth connectivity and browsing capabilities. Choices expanded to suit preferences for keyboards, trackballs, trackpads, and screen real estate.

By 2007, BlackBerry had sold over 9 million smartphone devices globally across sectors like legal, finance, healthcare and engineering. "Crackberry" addiction became a very real phenomenon.

Dismissing the iPhone Proves Costly

While BlackBerry focused squarely on enterprise users, Apple secretly plotted to launch a consumer-friendly smartphone for the masses.

When the pioneering iPhone debuted in 2007, it felt like a smartphone from the future compared to rivals. Touchscreens, accelerometers and gestures massively improved ease of use. iOS visual flair and animations outshined BlackBerry's dated Java interfaces.

However, BlackBerry's leadership dismissed the potential threat. After all, the iPhone lacked a physical keyboard optimized for email. And Apple couldn't match BlackBerry's security capabilities that encrypted data and selectively wiped devices. These were must-haves for regulated industries.

But average consumers quickly took to the iPhone's lush touchscreen and growing app ecosystem. It began making headway with business owners and younger professionals too.

In 2008, BlackBerry made a rare misstep with the Storm. Verizon commissioned this exclusive new phone to answer the iPhone with clickable touchscreen technology. Unfortunately, rushed hardware and software development resulted in a buggy, glitchy launch.

The Storm was panned as the most high-profile BlackBerry failure. Pre-orders topped 1 million units, but actual sales fell well short. Worse, Storm returns and support calls became a serious strain on finances and morale.

The Storm flop kicked off a sales slump made worse by the global recession. It also gave further momentum to Apple and the newly launched Android platform. Enterprise customers held off upgrade cycles as uncertainty grew.

Attempts to Save BlackBerry

Recognizing the iPhone's creeping rise, BlackBerry tried to answer with more consumer-friendly phones. But devices like the BlackBerry Torch failed to compete on app selection or processing punch.

Behind the scenes, years of product delays and internal disagreements took their toll. Long-time co-CEOs Mike Lazaridis and Jim Balsillie resigned in 2012, ceding control to new blood.

The next year, BlackBerry rebooted with the Z10 and Q10 – its first phones powered by the slick, new BlackBerry 10 OS. The company even rebranded itself from RIM to BlackBerry to unify its brand. However, neither model sold well enough to reverse fortunes.

By fall of 2013, BlackBerry agreed to a rescue investment deal led by Fairfax Holdings. But weeks later, the deal collapsed as no other backers emerged. Shortly after, CEO Thorsten Heins resigned, putting the future in jeopardy yet again.

A Legacy Brand Exits Smartphones

In a surprise move, BlackBerry turned to John Chen – an enterprise software expert from Sybase – to take the helm in late 2013. Having steered previous turnarounds, Chen moved quickly to cut costs and refocus R&D efforts.

Most importantly, BlackBerry began transitioning from selling devices to monetizing software and services. While newer phones like the KEYone continued receiving rave reviews, hardware was no longer make-or-break.

By 2016, BlackBerry formally exited designing smartphones to concentrate fully on software. New cybersecurity acquisitions helped the company pivot toward automotive, government and enterprise verticals. Slowly but surely, the brand climbed back from the brink under Chen's leadership.

Today BlackBerry provides intelligent security software securing over 500 million endpoints globally. Its QNX platform is embedded into over 215 million vehicles to enhance driver safety and experience. The legend clearly lives on!

BlackBerry's Pivotal Software & Services Acquisitions

While BlackBerry devices faded from the limelight, a series of major software acquisitions quietly built the foundations for an impressive turnaround. Let's analyze some key deals that fueled BlackBerry's rise as an enterprise software provider:

QNX Software Systems – $200M (2010)

Hardly known outside tech circles, QNX Software Systems specialized in ultra-reliable operating systems. Their code powered mission-critical systems ranging from nuclear power plants to Cisco network routers.

BlackBerry's leadership saw big potential in QNX for powering the next generation of automobiles. The acquisition brought top-notch kernel security, safety certifications and a proven microkernel architecture.

Over the next decade, QNX became the de facto standard for in-vehicle infotainment. It now ships on over 215 million cars globally – from BMWs to Chevrolets.

Good Technology – $425M (2015)

Good Technology was a leading mobile security platform securing and managing mobile devices for governments and regulated industries. Buying the company defended BlackBerry's stronghold among security-conscious CIOs.

In particular, Good solidified BlackBerry's multi-OS strategy by adding advanced iOS and Android management. It also brought complementary mobile productivity and file sharing tools.

The Department of Defense approved Good's encrypted solutions across all branches a year after BlackBerry's purchase. This underscored the combined entity's gold-standard reputation.

Cylance – $1.4B (2018)

Cylance developed trailblazing security products leveraging AI and machine learning algorithms. At the time of the acquisition, it was adding over 1,700 new endpoints daily to its next-gen antivirus solutions.

Owning Cylance's maturing AI security capabilities perfectly aligned with BlackBerry's future roadmap. It equipped them to combat rapidly evolving malware and insider threats using predictive methods.

Months later, BlackBerry integrated Cylance into core product suites for unified endpoint protection. Together, they staunchly defend 500M-plus endpoints worldwide from impending cyber attacks.

The Future Ahead

While BlackBerry no longer sells smartphones, its software and services fuel devices across essential industries. As cars, mobile networks, utilities and cities grow more connected than ever, BlackBerry innovation helps drive meaningful change securely.

Rather than chasing consumer gadgets, BlackBerry now enables smarter vehicles, faster networks, seamless logistics and safer infrastructure. And its reliable code continues running smoothly behind the scenes – much as it did at BlackBerry's peak.

For example, BlackBerry recently announced IVY – an intelligent vehicle data platform co-created with Amazon Web Services. By centralizing and analyzing volumes of car sensor information, IVY aims to accelerate software upgrade cycles and enable mass personalization.

Does this pivot lay the foundations for continued relevance over the next decade? We shall see – but writing off BlackBerry seems unwise given its storied history of refusing to back down.

And if recent rumors pan out, 2023 could potentially witness the rebirth of a classic BlackBerry smartphone. A 5G model with a physical keyboard yet modern Android 11 software could trigger a bout of nostalgia.

While the hardware may look familiar, there is no doubt the newly software-focused BlackBerry has come a long way since its peaks and valleys selling mobile devices. With its legacy now firmly rooted in digital security, the future looks bright for this enduring mobile pioneer.

What is the impact of AGI once it is achieved?

Food for thought.

iPhone 16 will have a new Camera Control button that will drive Apple users nuts

The tech giant today unveiled its iPhone 16 and iPhone 16 Plus, which harness cutting-edge hardware capabilities and an even sleeker, more sustainable design, and introduce ChatGPT-powered artificial intelligence, allowing users to make even more nuanced commands.

The iPhone 16 and iPhone 16 Plus start at $799 and $899, respectively.

#apple #iphone #technology #newsonleo

The iPhone, which comes in at 6.1 inches, or 6.7 inches for the Plus model, introduces a new camera-specific slide button on the phone’s lower half. This “Camera Control” feature automatically brings up the camera app and, when lightly tapped, allows users to toggle a drop-down of many high-end optic features to enhance pictures.

The camera itself is 48 megapixels, four times the resolution of the iPhone 14.

Next month, iPhone 16 users will test drive Apple Intelligence, a generative AI that has a bevy of writing and tone enhancements — Apple even says it can cushion Slack messages to your boss — along with the ability to create custom images and emojis from written prompts.

Siri also has a much more conversational ability to understand the context of conversations both spoken and written. It can scan text from messages and pull up music albums and other related things that are searchable.

In the iPhone’s camera roll, the new software builds on searchable features as well. Now users can promptly filter through their images and video by searching for names of people and actions they were doing to make collections of memories from specific dates.

The new iPhone’s camera also sees a major upgrade when paired with Apple Intelligence, which can essentially scan the world around you. If you show it a restaurant’s exterior, it can pull up hours, reviews, and menus, and it can also scan flyers to input dates into calendars.

DOJ launches case against Google in historic trial over digital ad dominance, attorney warns of 'unintended consequences'

Justice Department attorneys took aim at Google’s alleged monopoly over the digital advertising market on Monday in a federal antitrust case that poses a major threat to the Big Tech giant

Justice Department attorneys took aim at Google’s alleged monopoly over the digital advertising market on Monday in an antitrust case that poses a major threat to the Big Tech giant’s business model.

Opening arguments in the closely watched trial kicked off in a Virginia courtroom and drew a massive crowd of onlookers.

#google #newsonleo #doj #technology #search

Google’s opening statement was delivered by attorney Karen Dunn — a top litigator at white-shoe law firm Paul Weiss whose close ties to Democratic presidential candidate Kamala Harris have generated criticism from anti-monopoly watchdogs, as The Post has reported.

The DOJ and a coalition of US states are seeking a breakup of Google’s advertising business, including a forced divestment of Google’s Ad Manager product.

The feds allege that Google harms publishers and businesses alike by abusing its role as the primary gatekeeper for online ad deals.

“It’s worth saying the quiet part out loud,” DOJ attorney Julia Tarver Wood said during her opening statement. “One monopoly is bad enough. But a trifecta of monopolies is what we have here.”

The trial began just weeks after the DOJ won a stunning victory in a separate case targeting Google’s dominance over online search.

X global affairs boss Nick Pickles is latest top exec to leave after decade with company

Nick Pickles joined the company in 2014. In June, X’s head of business operations, Joe Benarroch, left the company.

X’s global affairs head Nick Pickles is quitting Elon Musk’s embattled social media platform after a decade with the company.

#newsonleo #x #technology

Pickles said he had decided to leave the company formerly known as Twitter several months ago and was working with Chief Executive Linda Yaccarino through the transition.

“After more than 10 years, tomorrow will be my last day at X. It’s been an incredible journey,” he wrote on X late Thursday night.

It was not immediately clear what his next plans were, or why he made the decision to quit.

X and Pickles did not immediately respond to requests for comment.

In June, X’s head of business operations, Joe Benarroch, left the company, the Wall Street Journal reported, citing people familiar with the matter.

China poised to embed 'Communist Party spies' inside US firms — including Microsoft, critics warn

“They can tell a [Chinese intelligence] agent, ‘This is the person you need to talk to. If you want to coerce them, here’s who they are,’” a House aide briefed on the law told The Post.

China’s government has positioned itself to embed “Communist Party spies” at Microsoft and other US companies that do business in the country — and further expose them to theft of trade secrets, employee poaching and even scary intimidation tactics, The Post has learned.

#technology #china #microsoft #espionage

A new version of Beijing’s “company law” that took effect July 1 — a clampdown that has seen scant coverage in the Western press, according to experts — requires multinational firms with more than 300 workers in the country to appoint an “employee representative” to their China affiliate’s board of directors.

Sources tell The Post that the “representatives” are almost certain to be in regular contact with Chinese authorities — if not outright members of the Chinese Communist Party. That, in turn, would give Beijing a direct line into the sensitive internal workings of American firms.

The new rules, which can alternatively require that companies adopt a local supervisory board for workers, could also give the CCP frightening tools to exert control over US companies’ employees, according to US officials.

“They can tell a [Chinese intelligence] agent, ‘This is the person you need to talk to. If you want to coerce them, here’s who they are,’” a House aide briefed on the law told The Post. “’I have access to employee data. This is their wife. This is where they live. This is where the kid goes to school.’”

From Groq:

What is multimodal in AI training?

In AI training, multimodal refers to the ability of a model to process and learn from multiple types of data or modalities simultaneously. This can include, but is not limited to:

  1. Visual data: images, videos, and graphics
  2. Text data: natural language, text, and speech
  3. Audio data: music, speech, and other sounds
  4. Sensor data: sensor readings from devices, such as temperature, pressure, and motion sensors
  5. Time-series data: sequential data, such as stock prices, weather patterns, and traffic flow

Multimodal AI models can learn to recognize patterns, relationships, and correlations between different modalities, enabling them to perform tasks that require the integration of multiple sources of information. This can lead to more accurate and robust decision-making, as well as improved performance in a wide range of applications.

Some examples of multimodal AI applications include:

  1. Image captioning: generating text descriptions of images
  2. Speech recognition: recognizing spoken language and transcribing it into text
  3. Visual question answering: answering questions about images
  4. Multimodal sentiment analysis: analyzing text, audio, and visual data to determine sentiment and emotions
  5. Autonomous vehicles: using sensor data from cameras, lidar, radar, and other sources to navigate and make decisions

The benefits of multimodal AI training include:

  1. Improved accuracy: by combining multiple sources of information, models can make more informed decisions
  2. Increased robustness: multimodal models can be more resistant to noise, outliers, and other sources of error
  3. Enhanced interpretability: multimodal models can provide insights into the relationships between different modalities
  4. Broader applicability: multimodal models can be applied to a wide range of domains and tasks

However, multimodal AI training also presents some challenges, such as:

  1. Data integration: combining data from different modalities can be complex and require significant data preprocessing and cleaning
  2. Model complexity: multimodal models can be more complex and require more computational resources than unimodal models
  3. Training data quality: the quality of the training data can have a significant impact on the performance of multimodal models

To overcome these challenges, researchers and practitioners are developing new techniques and architectures for multimodal AI training, such as:

  1. Multimodal fusion: combining data from different modalities using techniques such as concatenation, attention, and fusion
  2. Multimodal translation: translating data from one modality to another, such as translating text to speech
  3. Multimodal embeddings: learning shared representations across different modalities
  4. Multimodal attention: focusing on specific modalities or features when processing multimodal data
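To make the first technique concrete, here is a minimal sketch of multimodal fusion by concatenation. The feature sizes and the projection weights are invented for illustration; real systems would use trained encoders and a learned projection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-extracted features for one training example.
image_feat = rng.standard_normal(512)   # e.g. output of a vision encoder
text_feat = rng.standard_normal(256)    # e.g. output of a text encoder

# Multimodal fusion by concatenation: join both modalities into one vector.
fused = np.concatenate([image_feat, text_feat])      # shape (768,)

# A shared projection maps the fused vector into a joint embedding space.
W = rng.standard_normal((128, fused.shape[0])) * 0.01
joint_embedding = W @ fused                          # shape (128,)

print(fused.shape, joint_embedding.shape)
```

Attention-based fusion replaces the simple concatenation with learned weights over modalities, but the overall shape of the pipeline is the same.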

Overall, multimodal AI training has the potential to revolutionize many areas of AI research and application, enabling more accurate, robust, and interpretable decision-making in a wide range of domains.

Why is multimodal more expensive to train than text?

Multimodal AI models, which process and learn from multiple types of data or modalities, can be more expensive to train than text-only models for several reasons:

  1. Data size and complexity: Multimodal data can be much larger and more complex than text data. For example, images and videos require more storage space and computational resources than text data. This can lead to increased costs for data storage, processing, and transmission.
  2. Computational requirements: Multimodal models often require more powerful hardware and computational resources to process and analyze the data. This can include:
    • Graphics Processing Units (GPUs): Multimodal models often require specialized GPUs to handle the computational demands of processing images, videos, and other visual data.
    • Tensor Processing Units (TPUs): TPUs are designed specifically for machine learning workloads and can accelerate the processing of large datasets.
    • High-performance computing clusters: Large-scale multimodal models may require access to high-performance computing clusters to process and analyze the data.
  3. Model complexity: Multimodal models can be more complex and require more parameters than text-only models. This can lead to increased computational requirements and training times.
  4. Training data quality: Multimodal data can be more challenging to collect and preprocess than text data. This can lead to increased costs for data annotation, labeling, and cleaning.
  5. Model architecture: Multimodal models often require more sophisticated architectures to handle the complexity of the data. This can include:
    • Convolutional neural networks (CNNs): CNNs are commonly used for image and video processing and can be computationally expensive to train.
    • Recurrent neural networks (RNNs): RNNs are commonly used for sequential data such as speech and text, and can be computationally expensive to train.
    • Attention mechanisms: Attention mechanisms are often used in multimodal models to focus on specific parts of the input data. This can add complexity and computational requirements to the model.
  6. Hyperparameter tuning: Multimodal models often require more extensive hyperparameter tuning to optimize performance. This can involve more iterations of training, testing, and evaluation, which can increase computational costs.
  7. Model evaluation: Multimodal models often require more extensive evaluation and testing to ensure performance and accuracy. This can involve more data collection, annotation, and processing, which can increase costs.
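The data-size point can be made concrete with a back-of-the-envelope comparison; the image resolution and token count below are illustrative assumptions, not figures from any specific model:

```python
# Rough per-example storage: one RGB image vs. one text sequence.
image_bytes = 224 * 224 * 3   # 224x224 image, 3 channels, 1 byte each
text_bytes = 512 * 2          # 512 tokens at ~2 bytes per token ID

ratio = image_bytes / text_bytes
print(image_bytes, text_bytes, round(ratio))  # → 150528 1024 147
```

Even at this modest resolution, a single image is roughly two orders of magnitude larger than a typical text example, which compounds across storage, bandwidth, and compute.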

To mitigate these costs, researchers and practitioners are exploring various strategies, such as:

  1. Using transfer learning: Pre-trained models can be fine-tuned on smaller datasets, reducing the need for large-scale training.
  2. Using cloud-based services: Cloud-based services such as Google Cloud AI Platform, Amazon SageMaker, and Microsoft Azure Machine Learning can provide access to scalable computing resources and reduce the need for on-premises infrastructure.
  3. Using distributed computing: Distributed computing frameworks such as Apache Spark and Hadoop can enable parallel processing and reduce training times.
  4. Using more efficient algorithms: Researchers are developing more efficient algorithms and models that can reduce computational requirements and training times.
  5. Using data augmentation: Data augmentation techniques can increase the size and diversity of the training dataset, reducing the need for large-scale data collection and preprocessing.

By exploring these strategies, researchers and practitioners can reduce the costs associated with training multimodal AI models and make them more accessible and practical for a wider range of applications.

Apple’s Visual Intelligence is a built-in take on Google Lens

Apple Intelligence will identify real-world objects for you.

Apple has announced a new feature called Visual Intelligence that will be part of iOS 18’s Apple Intelligence suite of AI features “later this year.” The feature works much like similar features offered by other multimodal AI systems from Google or OpenAI.

#apple #ai #google #images

Visual Intelligence lets you “instantly learn about everything you see,” Apple’s Craig Federighi said during the company’s September event today. Federighi said the feature is “enabled by Camera Control,” which is the company’s name for a new capacitive camera button that’s now on the side of the iPhone 16 and 16 Pro phones. To trigger it, users will need to click and hold the button, then point the phone’s camera at whatever they’re curious about.

iPhones use a “combination of on-device intelligence and Apple services that never store your images” to power Visual Intelligence and let you take a picture of a restaurant to get info about its hours. Point your camera at a flyer, and “details like title, date, and location are automatically recorded,” he said. Federighi added that the feature is “also your gateway to third-party” models, which suggests using Visual Intelligence to search Google for a bike that you find out in the wild or take a picture of study notes to get help with a concept.

Apple didn’t announce when the feature would debut beyond that it’s “coming to Camera Control later this year.”

Roblox announces AI tool for generating 3D game worlds from text

New AI feature aims to streamline game creation on popular online platform.

On Friday, Roblox announced plans to introduce an open source generative AI tool that will allow game creators to build 3D environments and objects using text prompts, reports MIT Tech Review. The feature, which is still under development, may streamline the process of creating game worlds on the popular online platform, potentially opening up more aspects of game creation to those without extensive 3D design skills.

#ai #technology #roblox #gaming

Roblox has not announced a specific launch date for the new AI tool, which is based on what it calls a "3D foundational model." The company shared a demo video in which a user types "create a race track," then "make the scenery a desert," and the AI generates a matching track in a desert environment.

The system will also reportedly let users make modifications, such as changing the time of day or swapping out entire landscapes, and Roblox says the multimodal AI model will ultimately accept video and 3D prompts, not just text.

Next-token prediction in 3D

Roblox's 3D foundational model approach involves a custom next-token prediction model—a foundation not unlike the large language models (LLMs) that power ChatGPT. Tokens are fragments of text data that LLMs use to process information. Roblox's system "tokenizes" 3D blocks by treating each block as a numerical unit, which allows the AI model to predict the most likely next structural 3D element in a sequence. In aggregate, the technique can build entire objects or scenery.
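The idea of predicting the next block token can be sketched with a toy bigram model; the block vocabulary and training sequence here are invented for illustration, and Roblox's actual system would use a large learned model rather than simple counts:

```python
from collections import Counter, defaultdict

# Toy vocabulary of 3D block types, each "tokenized" as an integer ID.
blocks = {"air": 0, "grass": 1, "road": 2, "sand": 3}

# A tiny training sequence of block tokens (e.g. a flattened terrain strip).
sequence = [1, 2, 2, 2, 1, 2, 2, 2, 1, 2, 2, 2]

# Count bigram transitions: which block tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(sequence, sequence[1:]):
    transitions[prev][nxt] += 1

def predict_next(block_id):
    """Predict the most likely next block given the current one."""
    return transitions[block_id].most_common(1)[0][0]

print(predict_next(blocks["grass"]))  # → 2, i.e. "road" usually follows grass
```

A real 3D foundational model conditions on far more context (neighboring blocks in all three dimensions plus the text prompt), but the next-token objective is the same.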

Anupam Singh, vice president of AI and growth engineering at Roblox, told MIT Tech Review about the challenges in developing the technology. "Finding high-quality 3D information is difficult," Singh said. "Even if you get all the data sets that you would think of, being able to predict the next cube requires it to have literally three dimensions, X, Y, and Z."