Category: TECH

  • Sergey Brin says RTO is key to Google winning the AGI race

    Google co-founder Sergey Brin sent a memo to employees this week urging them to return to the office “at least every weekday” in order to help the company win the AGI race, The New York Times reports. Brin told employees that working 60 hours a week is a “sweet spot” for productivity.

    While Brin’s memo is not an official policy change for Google, which currently requires employees to work in person three days a week, it does show the pressure Silicon Valley giants are feeling to compete in AI. The memo also indicates that Brin believes Google could build AGI, an AI system with capabilities on par with, or beyond, human intelligence.

    Brin has reportedly returned to Google in recent years to help the company regain its footing in the AI race. Google was caught off guard by OpenAI’s 2022 release of ChatGPT, but it has since worked diligently to catch up, releasing industry-leading AI models of its own.


  • How much does ChatGPT cost? Everything you need to know about OpenAI’s pricing plans

    OpenAI’s AI-powered chatbot platform ChatGPT keeps expanding with new features. The chatbot’s memory feature lets you save preferences so that chats are more tailored to you. ChatGPT also has an upgraded voice mode, letting you interact with the platform more or less in real time. It even offers a store — the GPT Store — for AI-powered applications and services.

    So, you might be wondering: How much does ChatGPT cost? It’s a tougher question to answer than you might think. OpenAI offers an array of plans for ChatGPT, both paid and free, aimed at customers ranging from individuals to nonprofits, small- and medium-sized businesses, educational institutions, and enterprises.

    To keep track of the various ChatGPT subscription options available, we’ve put together a guide on ChatGPT pricing. We’ll keep it updated as new plans are introduced.

    ChatGPT free

    Once upon a time, the free version of ChatGPT was quite limited in what it could do. But that’s changed as OpenAI has rolled out new capabilities and underlying generative AI models.

    ChatGPT free users get access to OpenAI’s GPT-4o mini model, responses augmented with content from the web, access to the GPT Store, and the ability to upload files and photos and ask questions about those uploads. Free users also have limited access to more advanced features, including Advanced Voice mode, GPT-4o, and o3-mini. Users can also store chat preferences as “memories” and leverage advanced data analysis, a ChatGPT feature that can “reason over” (i.e., analyze data from) files such as spreadsheets and PDFs.

    There are downsides that come with the free ChatGPT plan, however, including daily capacity limits on the GPT-4o model and file uploads, depending on demand. ChatGPT free users also miss out on more advanced features, which we discuss in greater detail below.

    ChatGPT Plus

    For individual users who want a more capable ChatGPT, there’s ChatGPT Plus, which costs $20 per month.

    ChatGPT Plus offers higher capacity than ChatGPT free — users can send 80 messages to GPT-4o every three hours and unlimited messages to GPT-4o mini — plus access to OpenAI’s reasoning models, including o3-mini, o1-preview, and o1-mini.

    Subscribers to ChatGPT Plus also get access to multimodal features, such as Advanced Voice mode with video and screen sharing, although they may run into daily limits.

    ChatGPT Plus subscribers also get limited access to newer tools, including OpenAI’s deep research agent and Sora’s video generation.

    In addition, ChatGPT Plus subscribers get an upgraded data analysis feature, underpinned by GPT-4o, that can create interactive charts and tables from datasets. Users can upload the files to be analyzed directly from Google Drive and Microsoft OneDrive or from their devices.

    ChatGPT Pro

    For people who want near-unlimited access to OpenAI’s products, and the chance to try new features out first, there’s ChatGPT Pro. The plan costs $200 a month.

    Subscribers to ChatGPT Pro get unlimited access to reasoning models, GPT-4o, and Advanced Voice mode. The $200 tier also comes with 120 deep research queries a month, as well as access to o1 pro mode, which uses more compute than the version of o1 available in ChatGPT Plus.

    ChatGPT Pro users also get access to OpenAI’s web-browsing agent, Operator, and more video generations with Sora.

    OpenAI tends to release most of its new features to ChatGPT Pro users first, and these users get priority access to existing features, such as GPT-4o, during times of high demand.

    ChatGPT Team

    Say you own a small business or manage an organization and want more than one ChatGPT license, plus collaborative features. ChatGPT Team might fit the bill: It costs $30 per user per month, or $25 per user per month billed annually, for up to 149 users.

    ChatGPT Team provides a dedicated workspace and admin tools for team management. All users in a ChatGPT Team plan gain access to OpenAI’s latest models and the aforementioned tools that let ChatGPT analyze, edit and extract info from files. Beyond this, ChatGPT Team lets people within a team build and share custom apps — similar to the apps in the GPT Store — based on OpenAI models. These apps can be tailored for specific use cases or departments, or tuned on a team’s data.

    ChatGPT Enterprise

    Large organizations — any organization in need of more than 149 ChatGPT licenses, to be specific — can opt for ChatGPT Enterprise, OpenAI’s corporate-focused ChatGPT plan. OpenAI doesn’t publish the price of ChatGPT Enterprise, but the reported cost is around $60 per user per month with a minimum of 150 users and a 12-month contract.

    ChatGPT Enterprise adds “enterprise-grade” privacy and data analysis capabilities on top of the vanilla ChatGPT, as well as enhanced performance and customization options. There’s a dedicated workspace and admin console with tools to manage how employees within an organization use ChatGPT, including integrations for single sign-on, domain verification and a dashboard showing usage and engagement statistics.

    Shareable conversation templates provided as a part of ChatGPT Enterprise allow users to build internal workflows and bots leveraging ChatGPT, while credits to OpenAI’s API platform let companies create fully custom ChatGPT-powered solutions if they choose.

    ChatGPT Enterprise customers also get priority access to models, as well as direct lines to OpenAI expertise, including a dedicated account team, training, and consolidated invoicing. And they’re eligible for Business Associate Agreements with OpenAI, which are required by U.S. law for companies that wish to use tools like ChatGPT with private health information such as medical records.

    ChatGPT Edu

    ChatGPT Edu, a newer offering from OpenAI, delivers a version of ChatGPT built for universities and the students attending them — as well as faculty, staff researchers and campus operations teams. Pricing hasn’t been made public or reported secondhand yet, but we’ll update this section if it is.

    ChatGPT Edu is comparable to ChatGPT Enterprise with the exception that it supports SCIM, an open protocol used to simplify cloud identity and access management. (OpenAI plans to bring SCIM to ChatGPT Enterprise in the future.) As with ChatGPT Enterprise, ChatGPT Edu customers get data analysis tools, admin controls, single sign-on, enhanced security and the ability to build and share custom chatbots.
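    For context, SCIM boils down to a small set of REST endpoints exchanging JSON user records. Below is a rough sketch of what a standard SCIM 2.0 user-provisioning call looks like (per RFC 7644); the base URL and token are hypothetical placeholders, not OpenAI’s actual SCIM configuration, which would come from ChatGPT Edu’s admin documentation.

    ```python
    # Illustrative SCIM 2.0 user-provisioning request (RFC 7644).
    # The endpoint and token below are hypothetical placeholders.
    import requests

    SCIM_BASE = "https://example.edu/scim/v2"  # hypothetical SCIM base URL
    TOKEN = "YOUR_API_TOKEN"                   # hypothetical bearer token

    new_user = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "jdoe@example.edu",
        "name": {"givenName": "Jane", "familyName": "Doe"},
        "emails": [{"value": "jdoe@example.edu", "primary": True}],
        "active": True,
    }

    resp = requests.post(
        f"{SCIM_BASE}/Users",
        json=new_user,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/scim+json",
        },
    )
    resp.raise_for_status()
    print(resp.json()["id"])  # server-assigned SCIM resource id
    ```

    Deprovisioning a departing student is the mirror image, a PATCH or DELETE against the same /Users resource, which is what lets identity providers keep campus rosters in sync automatically.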

    ChatGPT Edu also comes with the latest OpenAI models and, importantly, increased message limits.

    OpenAI for Nonprofits

    OpenAI for Nonprofits is OpenAI’s early foray into nonprofit tech solutions. It’s not a stand-alone ChatGPT plan so much as a range of discounts for eligible organizations.

    Nonprofits can access ChatGPT Team at a discounted rate of $20 per user per month. Larger nonprofits can get a 50% discount on ChatGPT Enterprise, which works out to about $30 per user per month.
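    To make the tier comparisons concrete, here is a minimal sketch that pulls together the prices cited throughout this guide and computes rough annual costs. The numbers are this article’s reported figures as of February 25, 2025 (Enterprise pricing in particular is unofficial), and the function name is ours, for illustration only.

    ```python
    # Plan prices cited in this guide, in USD per user per month.
    # Reported figures as of February 25, 2025; Enterprise is unofficial.
    PLANS = {
        "Free": 0.0,
        "Plus": 20.0,
        "Pro": 200.0,
        "Team (monthly)": 30.0,         # up to 149 users
        "Team (annual)": 25.0,          # billed annually, up to 149 users
        "Enterprise (reported)": 60.0,  # ~reported; 150-user min, 12-month contract
        "Team (nonprofit)": 20.0,       # discounted nonprofit rate
        "Enterprise (nonprofit)": 30.0, # ~50% off the reported Enterprise rate
    }

    def annual_cost(plan: str, users: int) -> float:
        """Rough annual cost for `users` seats on `plan`."""
        return PLANS[plan] * users * 12

    if __name__ == "__main__":
        # Minimum Enterprise contract at the reported rate: 150 seats, 12 months.
        print(annual_cost("Enterprise (reported)", 150))  # 108000.0
        # A 10-person team billed annually.
        print(annual_cost("Team (annual)", 10))           # 3000.0
    ```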

    The eligibility requirements are quite strict, however. While nonprofits based anywhere in the world can apply for discounts, OpenAI isn’t currently accepting applications from academic, medical, religious or governmental institutions.

    This article was originally published on June 15, 2024. It was updated on February 25, 2025, to include new features from OpenAI, including o1 and deep research, as well as the new ChatGPT Pro plan.


  • Did xAI lie about Grok 3’s benchmarks?

    Debates over AI benchmarks — and how they’re reported by AI labs — are spilling out into public view.

    This week, an OpenAI employee accused Elon Musk’s AI company, xAI, of publishing misleading benchmark results for its latest AI model, Grok 3. One of the co-founders of xAI, Igor Babushkin, insisted that the company was in the right.

    The truth lies somewhere in between.

    In a post on xAI’s blog, the company published a graph showing Grok 3’s performance on AIME 2025, a collection of challenging math questions from a recent invitational mathematics exam. Some experts have questioned AIME’s validity as an AI benchmark. Nevertheless, AIME 2025 and older versions of the test are commonly used to probe a model’s math ability.

    xAI’s graph showed two variants of Grok 3, Grok 3 Reasoning Beta and Grok 3 mini Reasoning, beating OpenAI’s best-performing available model, o3-mini-high, on AIME 2025. But OpenAI employees on X were quick to point out that xAI’s graph didn’t include o3-mini-high’s AIME 2025 score at “cons@64.”

    What is cons@64, you might ask? Well, it’s short for “consensus@64,” and it basically gives a model 64 tries to answer each problem in a benchmark, taking the most frequently generated answers as the final answers. As you can imagine, cons@64 tends to boost models’ benchmark scores quite a bit, and omitting it from a graph might make it appear as though one model surpasses another when in reality, that isn’t the case.
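    To make the metric concrete, here is a minimal sketch of how a consensus@k score could be computed, assuming you already have some sample_answer() function that queries a model once and returns its final answer. This illustrates the metric itself, not any lab’s actual evaluation harness.

    ```python
    from collections import Counter

    def cons_at_k(sample_answer, problems, k: int = 64) -> float:
        """Consensus@k: sample k answers per problem and grade the
        majority-vote answer. `problems` is a list of (problem, gold)
        pairs; `sample_answer(problem)` is assumed to query the model."""
        correct = 0
        for problem, gold in problems:
            answers = [sample_answer(problem) for _ in range(k)]
            majority, _count = Counter(answers).most_common(1)[0]
            correct += int(majority == gold)
        return correct / len(problems)

    def at_1(sample_answer, problems) -> float:
        """@1 scoring, by contrast, grades a single sample per problem."""
        return sum(
            int(sample_answer(problem) == gold) for problem, gold in problems
        ) / len(problems)
    ```

    Because majority voting over 64 samples filters out a model’s occasional slips, cons@64 is effectively a different (and far more expensive) test condition than @1, which is why comparing one model’s cons@64 score against another’s @1 score paints a misleading picture.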

    Grok 3 Reasoning Beta’s and Grok 3 mini Reasoning’s scores for AIME 2025 at “@1” — meaning the first score the models got on the benchmark — fall below o3-mini-high’s score. Grok 3 Reasoning Beta also trails ever so slightly behind OpenAI’s o1 model set to “medium” compute. Yet xAI is advertising Grok 3 as the “world’s smartest AI.”

    Babushkin argued on X that OpenAI has published similarly misleading benchmark charts in the past — albeit charts comparing the performance of its own models. A more neutral party in the debate put together a more “accurate” graph showing nearly every model’s performance at cons@64.

    But as AI researcher Nathan Lambert pointed out in a post, perhaps the most important metric remains a mystery: the computational (and monetary) cost it took for each model to achieve its best score. That just goes to show how little most AI benchmarks communicate about models’ limitations — and their strengths.

  • ‘For You’ feeds are not for creators, Patreon says

    Patreon has continued its crusade against algorithmic feeds with its latest State of Create report, a look at trends in the creator economy based on internal data, and it’s an effort creators can get behind.

    In its survey of 1,000 creators and 2,000 fans, the membership platform reported that 53% of creators think it is more difficult to reach their followers today than it was five years ago.

    This doesn’t come as a surprise. Celebrities have fought against Instagram’s video-centric, algorithmic feed, making it difficult even for the Kardashians to reach their fans. And if Kylie Jenner is having trouble connecting with her audience, then it’s even worse for creators who aren’t household names.

    Fans are frustrated with social platforms’ shift toward short-form video and the “For You” feed, both of which were pioneered by TikTok. According to Patreon’s survey, fans say that they are seeing more short-form content on social media than long-form content — but 52% of fans said they find long-form content more valuable and that, overall, they would be more willing to pay for it. Long-form content also tends to generate more income via ad revenue share on YouTube, since platforms continue to struggle with short-form content monetization.

    This is the fundamental tension of today’s creator economy: platforms like TikTok have made it easier than ever to build an audience, but the sheer volume of algorithmically-served content means that once creators earn a fan’s attention, it’s hard to maintain it. If a fan follows a creator on TikTok or Instagram, they might not actually see the majority of that creator’s posts, since they’re drowned out by posts from people they don’t follow.

    That’s why, as creators told Patreon, they now prioritize quality and deeper connections with fans over metrics like follower counts, likes, and views — a shift from five years ago.

    “When you focus on the platform mitigating the relationship between the creator and the subscriber, what you’re essentially doing is giving the platform the power and the responsibility to decide what to send to whom, when,” Patreon CEO Jack Conte told TechCrunch when Instagram made major changes to its algorithmic feed in 2022. “And that’s the part of it that makes me angry as a creator. Because I’ve spent years, decades building communities on these platforms.”

    As more creators than ever try to make a living on the internet, a clear path toward connecting with fans is essential to monetize their businesses. But the dominance of algorithms often obstructs that path, forcing them to adapt their content to fit platform preferences. In fact, 78% of creators in the report said that ‘The Algorithm’ impacts what they create, and 56% admitted it has discouraged them from exploring their passions and interests.

    Those challenges are compounded by the broader instability of social media platforms themselves. With TikTok in legal jeopardy, Meta overhauling its content moderation precedents, and X platforming fringe extremism, creators are growing more frustrated with the current state of social media. Direct-to-consumer content platforms like Patreon, Substack, and OnlyFans have made it easier for creators to control their content and to make money, yet it’s becoming harder to connect with the people who want to pay for their content in the first place.

    “‘The Algorithm’ doesn’t measure what people want,” said Karen X. Cheng, a Patreon creator, in the survey. “It measures what people pay attention to.”


  • These researchers used NPR Sunday Puzzle questions to benchmark AI ‘reasoning’ models

    Every Sunday, NPR host Will Shortz, The New York Times’ crossword puzzle guru, gets to quiz thousands of listeners in a long-running segment called the Sunday Puzzle. While written to be solvable without too much foreknowledge, the brainteasers are usually challenging even for skilled contestants.

    That’s why some experts think they’re a promising way to test the limits of AI’s problem-solving abilities.

    In a recent study, a team of researchers hailing from Wellesley College, Oberlin College, the University of Texas at Austin, Northeastern University, Charles University, and startup Cursor created an AI benchmark using riddles from Sunday Puzzle episodes. The team says their test uncovered surprising insights, like that reasoning models — OpenAI’s o1, among others — sometimes “give up” and provide answers they know aren’t correct.

    “We wanted to develop a benchmark with problems that humans can understand with only general knowledge,” Arjun Guha, a computer science faculty member at Northeastern and one of the co-authors on the study, told TechCrunch.

    The AI industry is in a bit of a benchmarking quandary at the moment. Most of the tests commonly used to evaluate AI models probe for skills, like competency on PhD-level math and science questions, that aren’t relevant to the average user. Meanwhile, many benchmarks — even benchmarks released relatively recently — are quickly approaching the saturation point.

    The advantage of a public radio quiz game like the Sunday Puzzle is that it doesn’t test for esoteric knowledge, and the challenges are phrased such that models can’t draw on “rote memory” to solve them, explained Guha.

    “I think what makes these problems hard is that it’s really difficult to make meaningful progress on a problem until you solve it — that’s when everything clicks together all at once,” Guha said. “That requires a combination of insight and a process of elimination.”

    No benchmark is perfect, of course. The Sunday Puzzle is U.S.-centric and English-only. And because the quizzes are publicly available, it’s possible that models trained on them can “cheat” in a sense, although Guha says he hasn’t seen evidence of this.

    “New questions are released every week, and we can expect the latest questions to be truly unseen,” he added. “We intend to keep the benchmark fresh and track how model performance changes over time.”

    On the researchers’ benchmark, which consists of around 600 Sunday Puzzle riddles, reasoning models such as o1 and DeepSeek’s R1 far outperform the rest. Reasoning models thoroughly fact-check themselves before giving out results, which helps them avoid some of the pitfalls that normally trip up AI models. The trade-off is that reasoning models take a little longer to arrive at solutions — typically seconds to minutes longer.
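    Conceptually, the evaluation loop behind a benchmark like this is simple. Here is a hedged sketch that assumes each riddle has a single canonical answer and that a model_answer() function queries one model; the researchers’ actual harness, prompts, and answer-matching rules are not reproduced here.

    ```python
    # Conceptual sketch of a riddle-benchmark evaluation loop.
    import time

    def evaluate(model_answer, riddles) -> dict:
        """`riddles` is a list of (riddle, gold_answer) pairs;
        `model_answer(riddle)` is assumed to return an answer string."""
        correct, latencies = 0, []
        for riddle, gold in riddles:
            start = time.perf_counter()
            answer = model_answer(riddle)
            latencies.append(time.perf_counter() - start)
            correct += int(answer.strip().lower() == gold.strip().lower())
        return {
            "accuracy": correct / len(riddles),
            "mean_seconds_per_riddle": sum(latencies) / len(latencies),
        }
    ```

    Tracking latency alongside accuracy captures the trade-off described above: reasoning models score higher but spend seconds to minutes longer per riddle.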

    At least one model, DeepSeek’s R1, gives solutions it knows to be wrong for some of the Sunday Puzzle questions. R1 will state verbatim “I give up,” followed by an incorrect answer chosen seemingly at random — behavior this human can certainly relate to.

    The models make other bizarre choices, like giving a wrong answer only to immediately retract it, attempt to tease out a better one, and fail again. They also get stuck “thinking” forever and give nonsensical explanations for answers, or they arrive at a correct answer right away but then go on to consider alternative answers for no obvious reason.

    “On hard problems, R1 literally says that it’s getting ‘frustrated,’” Guha said. “It was funny to see how a model emulates what a human might say. It remains to be seen how ‘frustration’ in reasoning can affect the quality of model results.”

    [Image: R1 getting “frustrated” on a question in the Sunday Puzzle challenge set. Image credits: Guha et al.]

    The current best-performing model on the benchmark is o1 with a score of 59%, followed by the recently released o3-mini set to high “reasoning effort” (47%). (R1 scored 35%.) As a next step, the researchers plan to broaden their testing to additional reasoning models, which they hope will help to identify areas where these models might be enhanced.

    [Image: The scores of the models the team tested on their benchmark. Image credits: Guha et al.]

    “You don’t need a PhD to be good at reasoning, so it should be possible to design reasoning benchmarks that don’t require PhD-level knowledge,” Guha said. “A benchmark with broader access allows a wider set of researchers to comprehend and analyze the results, which may in turn lead to better solutions in the future. Furthermore, as state-of-the-art models are increasingly deployed in settings that affect everyone, we believe everyone should be able to intuit what these models are — and aren’t — capable of.”


  • UK drops ‘safety’ from its AI body, now called AI Security Institute, inks MOU with Anthropic

    The U.K. government wants to make a hard pivot into boosting its economy and industry with AI, and as part of that, it’s repurposing an institution that it founded a little over a year ago for a very different purpose. Today the Department for Science, Innovation and Technology announced that it would be renaming the AI Safety Institute the “AI Security Institute.” With that, the body will shift from primarily exploring areas like existential risk and bias in large language models to a focus on cybersecurity, specifically “strengthening protections against the risks AI poses to national security and crime.”

    Alongside this, the government also announced a new partnership with Anthropic. No firm services have been announced, but the MOU indicates the two will “explore” using Anthropic’s AI assistant Claude in public services, and Anthropic will aim to contribute to work in scientific research and economic modelling. At the AI Security Institute, it will provide tools to evaluate AI capabilities in the context of identifying security risks.

    “AI has the potential to transform how governments serve their citizens,” Anthropic co-founder and CEO Dario Amodei said in a statement. “We look forward to exploring how Anthropic’s AI assistant Claude could help UK government agencies enhance public services, with the goal of discovering new ways to make vital information and services more efficient and accessible to UK residents.”

    Anthropic is the only company being announced today — coinciding with a week of AI activities in Munich and Paris — but it’s not the only one working with the government. A series of new tools unveiled in January were all powered by OpenAI. (At the time, Peter Kyle, the Secretary of State for Technology, said that the government planned to work with various foundational AI companies, and that is what the Anthropic deal is proving out.)

    The government’s switch-up of the AI Safety Institute — launched just over a year ago with a lot of fanfare — to AI Security shouldn’t come as too much of a surprise. 

    When the newly installed Labour government announced its AI-heavy Plan for Change in January, it was notable that the words “safety,” “harm,” “existential,” and “threat” did not appear at all in the document.

    That was not an oversight. The government’s plan is to kickstart investment in a more modernized economy, using technology and specifically AI to do that. It wants to work more closely with Big Tech, and it also wants to build its own homegrown big techs. The main messages it has been promoting are development, AI, and more development. Civil servants will have their own AI assistant called “Humphrey,” and they’re being encouraged to share data and use AI in other areas to speed up how they work. Consumers will be getting digital wallets for their government documents, and chatbots.

    So have AI safety issues been resolved? Not exactly, but the message seems to be that they can’t be considered at the expense of progress.

    The government claimed that despite the name change, the song will remain the same.

    “The changes I’m announcing today represent the logical next step in how we approach responsible AI development – helping us to unleash AI and grow the economy as part of our Plan for Change,” Kyle said in a statement. “The work of the AI Security Institute won’t change, but this renewed focus will ensure our citizens – and those of our allies – are protected from those who would look to use AI against our institutions, democratic values, and way of life.”

    “The Institute’s focus from the start has been on security and we’ve built a team of scientists focused on evaluating serious risks to the public,” added Ian Hogarth, who remains the chair of the institute. “Our new criminal misuse team and deepening partnership with the national security community mark the next stage of tackling those risks.”

    Further afield, priorities definitely appear to have changed around the importance of “AI Safety.” The biggest risk the AI Safety Institute in the U.S. is contemplating right now is that it will be dismantled. U.S. Vice President J.D. Vance telegraphed as much earlier this week during his speech in Paris.


  • Plaid working with Goldman Sachs on raising $300M to $400M in tender offer

    Plaid, a company that connects bank accounts to financial applications, is working with Goldman Sachs on a deal that would let early-stage investors and employees sell their stock, raising between $300 million and $400 million, Bloomberg reported, citing sources.

    The tender offer, as such deals are called, will likely value the company lower than its previous financing round. Plaid raised a $425 million Series D at a post-money valuation of $13.4 billion in April 2021 in a deal led by Altimeter Capital.

    But since then, higher interest rates have led to lower valuations for many fintech startups.

    Plaid didn’t immediately respond to a request for comment.

    While Plaid initially focused on fintech clients, its customer base now includes established financial companies like H&R Block, Western Union, and Citi. The company’s revenue increased more than 25% in 2024, Bloomberg reported last month.

    Correction: An earlier version of this story said that Goldman will buy the shares for $300 million to $400 million.


  • Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test

    Anthropic’s CEO Dario Amodei is worried about competitor DeepSeek, the Chinese AI company that took Silicon Valley by storm with its R1 model. And his concerns could be more serious than the typical ones raised about DeepSeek sending user data back to China. 

    In an interview on Jordan Schneider’s ChinaTalk podcast, Amodei said DeepSeek generated rare information about bioweapons in a safety test run by Anthropic.

    DeepSeek’s performance was “the worst of basically any model we’d ever tested,” Amodei claimed. “It had absolutely no blocks whatsoever against generating this information.”

    Amodei stated that this was part of evaluations Anthropic routinely runs on various AI models to assess their potential national security risks. His team looks at whether models can generate bioweapons-related information that isn’t easily found on Google or in textbooks. Anthropic positions itself as the AI foundational model provider that takes safety seriously.

    Amodei said he didn’t think DeepSeek’s models today are “literally dangerous” in providing rare and dangerous information but that they might be in the near future. Although he praised DeepSeek’s team as “talented engineers,” he advised the company to “take seriously these AI safety considerations.”

    Amodei has also supported strong export controls on chips to China, citing concerns that they could give China’s military an edge.

    Amodei didn’t clarify in the ChinaTalk interview which DeepSeek model Anthropic tested, nor did he give more technical details about these tests. Anthropic didn’t immediately reply to a request for comment from TechCrunch. Neither did DeepSeek.

    DeepSeek’s rise has sparked concerns about its safety elsewhere, too. For example, Cisco security researchers said last week that DeepSeek R1 failed to block any harmful prompts in its safety tests, achieving a 100% jailbreak success rate.

    Cisco didn’t mention bioweapons but said it was able to get DeepSeek to generate harmful information about cybercrime and other illegal activities. It’s worth mentioning, though, that Meta’s Llama-3.1-405B and OpenAI’s GPT-4o also had high failure rates of 96% and 86%, respectively. 
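    Headline numbers like these are simple proportions over a fixed set of harmful test prompts. Here is a minimal sketch of the metric, assuming each prompt gets a binary blocked/not-blocked judgment; the prompt set and judging method belong to the security researchers and are not shown here.

    ```python
    def attack_success_rate(blocked: list[bool]) -> float:
        """Fraction of harmful prompts the model failed to block.
        `blocked[i]` is True when the model refused prompt i;
        a 100% rate means no prompt was blocked."""
        failures = sum(1 for b in blocked if not b)
        return failures / len(blocked)

    # Toy example: a model that refuses 7 of 50 harmful prompts.
    results = [True] * 7 + [False] * 43
    print(f"{attack_success_rate(results):.0%}")  # 86%
    ```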

    It remains to be seen whether safety concerns like these will make a serious dent in DeepSeek’s rapid adoption. Companies like AWS and Microsoft have publicly touted integrating R1 into their cloud platforms — ironically enough, given that Amazon is Anthropic’s biggest investor.

    On the other hand, there’s a growing list of countries, companies, and especially government organizations like the U.S. Navy and the Pentagon that have started banning DeepSeek. 

    Time will tell if these efforts catch on or if DeepSeek’s global rise will continue. Either way, Amodei says he does consider DeepSeek a new competitor that’s on the level of the U.S.’s top AI companies.

    “The new fact here is that there’s a new competitor,” he said on ChinaTalk. “In the big companies that can train AI — Anthropic, OpenAI, Google, perhaps Meta and xAI — now DeepSeek is maybe being added to that category.”


  • Alphabet praises DeepSeek, but it’s massively ramping up its AI spending

    Booming AI budgets seemed at risk last week when DeepSeek crashed Nvidia’s stock based on speculation that its cheaper AI models would lower demand for AI chips and data centers.

    Alphabet CEO Sundar Pichai has certainly noticed the Chinese AI company, praising its work as “tremendous” in Alphabet’s latest earnings call (while adding that some of Gemini’s models are just as efficient).

    But just like Meta, Alphabet isn’t throwing in the towel in Big Tech’s AI spending wars. In its latest earnings report, Alphabet announced it would boost capital expenditures to $75 billion — a whopping 42% increase over the roughly $52.5 billion it spent in 2024 — to accelerate its AI progress.

    Alphabet is betting that cheaper AI will massively boost demand for its services, rather than making AI essentially free and threatening its business model. The company noted that it stands to benefit from this rise in the usage of AI models — known as inference — thanks to its billions of existing users.

    “Part of the reason we are so excited about the AI opportunity is we know we can drive extraordinary use cases because the cost of actually using it is going to keep coming down, which will make more use cases feasible,” Pichai said during the earnings call. “And that’s the opportunity space. It’s as big as it comes, and that’s why you’re seeing us invest to meet that moment.”

    Meta CEO Mark Zuckerberg made similar comments in Meta’s earnings call last week, pledging to spend “hundreds of billions” on AI in the long term despite all the DeepSeek buzz.

    Whether this all pans out is unclear, but for now, tech giants can afford the AI bills, and when (or if) they’ll slow down is anyone’s guess.


  • Senator warns of national security risks after Elon Musk’s DOGE granted ‘full access’ to sensitive Treasury systems

    A senior U.S. lawmaker says representatives of Elon Musk were granted “full access” to a U.S. Treasury payments system used to disburse trillions of dollars to Americans each year, and warned that Musk’s access to the system poses a “national security risk.”

    Sen. Ron Wyden, a Democratic senator from Oregon and ranking member of the Senate Finance Committee, said in a post on Bluesky on Saturday that sources told his office that Treasury Secretary Scott Bessent gave Musk’s team, known as the Department of Government Efficiency, or DOGE, authorization to access the highly sensitive Treasury system on Friday. The authorization followed a standoff earlier in the week, in which the Treasury’s highest-ranking career official left the department after Musk’s team requested access to the system.

    “Social Security and Medicare benefits, grants, payments to government contractors, including those that compete directly with Musk’s own companies. All of it,” wrote Wyden in the post, referring to DOGE’s access.

    The New York Times also reported that Bessent granted DOGE access to the Treasury’s payment system on Friday. One of the DOGE representatives granted access is said to be Tom Krause, the chief executive of Cloud Software Group, which owns Citrix and several other companies. Krause did not return TechCrunch’s request for comment. A spokesperson for the Treasury did not comment when emailed Saturday.

    This is the latest effort by Musk and his associates to take over the inner workings of the U.S. federal government following President Trump’s return to office on January 20. Following his inauguration, Trump immediately ordered Musk to begin making widespread cuts to federal government spending.

    The system run by the Treasury’s Bureau of the Fiscal Service controls the disbursements of around $6 trillion in federal funds to American households, including Social Security and Medicare benefits, tax refunds, and payments to U.S. federal employees and contractors, according to a letter written by Wyden and sent to Bessent a day earlier. Access to the payments system was historically limited to a few staff because it contains personal information about millions of Americans who receive payments from the federal government, per the Times.

    According to Wyden’s letter, the payment systems “simply cannot fail, and any politically-motivated meddling in them risks severe damage to our country and the economy.”

    In his letter, Wyden said he was concerned that Musk’s extensive business operations in China “endangers U.S. cybersecurity” and creates conflicts of interest that “make his access to these systems a national security risk.”

    Last year, the Biden administration blamed China for a series of intrusions targeting U.S. critical infrastructure, the theft of senior American officials’ phone records during breaches of several U.S. phone and internet giants, and a breach of the Treasury’s own systems late last year. Wyden, also a long-serving member of the Senate Intelligence Committee, said it was “unusual to be granting access to sensitive systems to an individual with such significant business interests in China.”

    Several other federal departments are under scrutiny by DOGE, including the federal government’s own human resources department, known as the Office of Personnel Management.

    Reuters reported on Friday that Musk’s representatives locked career civil servants out of computer systems that contain the personal data and human resources files of millions of federal employees. The OPM was hacked in 2015 in a breach the U.S. government later attributed to China, resulting in the theft of personnel records on more than 22 million U.S. government employees, including staff with security clearances.
