
    Image Credits: Reliant

    Reliant’s paper-scouring AI takes on science’s data drudgery

    Devin Coldewey · 3:59 AM PDT · August 20, 2024

    AI models have proven capable of many things, but what tasks do we actually want them doing? Preferably drudgery — and there’s plenty of that in research and academia. Reliant hopes to specialize in the kind of time-consuming data extraction work that’s currently a specialty of tired grad students and interns.

    “The best thing you can do with AI is improve the human experience: reduce menial labor and let people do the things that are important to them,” said CEO Karl Moritz Hermann. In the research world, where he and co-founders Marc Bellemare and Richard Schlegel have worked for years, literature review is one of the most common examples of this “menial labor.”

    Every paper cites previous and related work, but finding these sources in the sea of science is not easy. And some, like systematic reviews, cite or use data from thousands.

    For one study, Hermann recalled, “The authors had to look at 3,500 scientific publications, and a lot of them ended up not being relevant. It’s a ton of time spent extracting a tiny amount of useful information — this felt like something that really ought to be automated by AI.”

    They knew that modern language models could do it: One experiment put ChatGPT on the task and found that it was able to extract data with an 11% error rate. Like many things LLMs can do, it’s impressive but nothing like what people actually need.

    Image Credits: Reliant AI

    “That’s just not good enough,” said Hermann. “For these knowledge tasks, menial as they may be, it’s very important that you don’t make mistakes.”

    Reliant’s core product, Tabular, is based in part on an LLM (Llama 3.1), but augmented with other proprietary techniques it is considerably more effective. On the multi-thousand-study extraction described above, the company said it performed the same task with zero errors.


    What that means is you dump a thousand documents in, say you want this, that, and the other data out of them, and Reliant pores through them and finds that information — whether it’s perfectly labeled and structured or (far more likely) it isn’t. Then it pops all that data and any analyses you wanted done into a nice UI so you can dive down into individual cases.
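
    The article doesn’t describe Tabular’s internals, but the workflow above maps onto a familiar pattern: declare the fields you want, ask a model for them document by document, and gather the answers into one table for review. Below is a minimal, hypothetical Python sketch of that pattern; the `SCHEMA` fields, the JSON-only prompt, and the `complete` callable are all illustrative assumptions, not Reliant’s actual interface.

```python
# A minimal sketch (not Reliant's implementation) of schema-guided extraction:
# declare the fields you want, prompt a model per document, collect one row each.
import json
from typing import Callable, Iterable

# Fields a researcher might want pulled out of every paper; purely illustrative.
SCHEMA = {
    "sample_size": "integer, number of subjects in the study",
    "intervention": "short string naming the treatment or exposure",
    "primary_outcome": "short string naming the main measured outcome",
}

def build_prompt(document_text: str) -> str:
    """Ask the model to return only the requested fields as a JSON object."""
    field_desc = "\n".join(f"- {name}: {desc}" for name, desc in SCHEMA.items())
    return (
        "Extract the following fields from the paper below. Respond with a "
        "single JSON object using exactly these keys, and use null for "
        "anything the paper does not report.\n"
        f"{field_desc}\n\n---\n{document_text}"
    )

def extract_table(
    documents: Iterable[tuple[str, str]],   # (paper_id, full text) pairs
    complete: Callable[[str], str],         # any LLM completion function (assumed)
) -> list[dict]:
    """Run every document through the prompt and gather one row per paper."""
    rows = []
    for paper_id, text in documents:
        raw = complete(build_prompt(text))
        try:
            fields = json.loads(raw)
        except json.JSONDecodeError:
            # Fill with nulls so a human reviewer can spot the failure in the UI.
            fields = {name: None for name in SCHEMA}
        rows.append({"paper_id": paper_id, **fields})
    return rows
```

    Here `complete` stands in for whatever model endpoint is actually used (the article only says Llama 3.1 is one component among proprietary ones), and rows that fail to parse are kept with nulls rather than dropped, echoing the idea of directing a user’s attention to individual cases instead of silently trusting the model.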

    “Our users need to be able to work with all the data all at once, and we’re building features to allow them to edit the data that’s there, or go from the data to the literature; we see our role as helping the users find where to spend their attention,” Hermann said.

    Image Credits: Reliant

    This tailored and effective application of AI — not as splashy as a digital friend but almost certainly much more viable — could accelerate science across a number of highly technical domains. Investors have taken note, funding an $11.3 million seed round; Tola Capital and Inovia Capital led the round, with angel Mike Volpi participating.

    Like any application of AI, Reliant’s tech is very compute-intensive, which is why the company has bought its own hardware rather than renting it a la carte from one of the big providers. Going in-house with hardware offers both risk and reward: You have to make these expensive machines pay for themselves, but you get the chance to crack open the problem space with dedicated compute.

    “One thing that we’ve found is it’s very challenging to give a good answer if you have limited time to give that answer,” Hermann explained — for instance, if a scientist asks the system to perform a novel extraction or analysis task on a hundred papers. It can be done quickly, or well, but not both — unless they predict what users might ask and figure out the answer, or something like it, ahead of time.

    “The thing is, a lot of people have the same questions, so we can find the answers before they ask, as a starting point,” said Bellemare, the startup’s chief science officer. “We can distill 100 pages of text into something else, that may not be exactly what you want, but it’s easier for us to work with.”

    Think about it this way: If you were going to extract the meaning from a thousand novels, would you wait until someone asked for the characters’ names to go through and grab them? Or would you just do that work ahead of time (along with things like locations, dates, relationships, etc.) knowing the data would likely be wanted? Certainly the latter — if you had the compute to spare.
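
    As a rough illustration of that precompute-then-serve idea (an assumption about the general shape of the approach, not Reliant’s actual pipeline), a sketch might cache the commonly requested fields for every paper at ingest time and fall back to on-demand extraction only for questions the cache never anticipated:

```python
# A minimal sketch of precomputing common extractions so queries become lookups.
# This is an assumed design, not Reliant's pipeline.
from typing import Callable

COMMON_FIELDS = ["sample_size", "intervention", "primary_outcome"]

def precompute(
    corpus: dict[str, str],                            # paper_id -> full text
    extract_field: Callable[[str, str], str | None],   # (text, field) -> value
) -> dict[tuple[str, str], str | None]:
    """Run the expensive extraction once, offline, for every (paper, field) pair."""
    cache = {}
    for paper_id, text in corpus.items():
        for field in COMMON_FIELDS:
            cache[(paper_id, field)] = extract_field(text, field)
    return cache

def answer(
    paper_id: str,
    field: str,
    cache: dict[tuple[str, str], str | None],
    fallback: Callable[[str, str], str | None],        # slow, on-demand extraction
    corpus: dict[str, str],
) -> str | None:
    """Serve precomputed answers instantly; only novel questions pay the full cost."""
    if (paper_id, field) in cache:
        return cache[(paper_id, field)]
    return fallback(corpus[paper_id], field)
```

    The expensive pass over the corpus happens once, on hardware the company controls and on its own schedule; a user’s query then costs a dictionary lookup unless it asks for something the cache never anticipated.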

    This pre-extraction also gives the models time to resolve the inevitable ambiguities and assumptions found in different scientific domains. When one metric “indicates” another, it may not mean the same thing in pharmaceuticals as it does in pathology or clinical trials. Not only that, but language models tend to give different outputs depending on how they’re asked certain questions. So Reliant’s job has been to turn ambiguity into certainty — “and this is something you can only do if you’re willing to invest in a particular science or domain,” Hermann noted.

    As a company, Reliant’s first focus is on establishing that the tech can pay for itself before attempting anything more ambitious. “In order to make interesting progress, you have to have a big vision but you also need to start with something concrete,” said Hermann. “From a startup survival point of view, we focus on for-profit companies, because they give us money to pay for our GPUs. We’re not selling this at a loss to customers.”

    One might expect the firm to feel the heat from companies like OpenAI and Anthropic, which are pouring money into handling more structured tasks like database management and coding, or from implementation partners like Cohere and Scale. But Bellemare was optimistic: “We’re building this on a groundswell — any improvement in our tech stack is great for us. The LLM is one of maybe eight large machine learning models in there — the others are fully proprietary to us, made from scratch on data proprietary to us.”

    The transformation of the biotech and research industry into an AI-driven one is certainly only beginning and may be fairly patchwork for years to come. But Reliant seems to have found a strong footing to start from.

    “If you want the 95% solution, and you just apologize profusely to one of your customers once in a while, great,” said Hermann. “We’re for where precision and recall really matter, and where mistakes really matter. And frankly, that’s enough; we’re happy to leave the rest to others.”

    (This story originally had Hermann’s name incorrect — my own error, I have changed it throughout.)
