<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
    <channel>
        <copyright>Copyright TechTarget - All rights reserved</copyright>
        <description></description>
        <docs>https://cyber.law.harvard.edu/rss/rss.html</docs>
        <generator>Techtarget Feed Generator</generator>
        <language>en</language>
        <lastBuildDate>Tue, 14 Apr 2026 22:34:54 GMT</lastBuildDate>
        <link>https://www.techtarget.com/searchenterpriseai</link>
        <managingEditor>editor@techtarget.com</managingEditor>
        <item>
            <body>&lt;p&gt;Enterprise AI systems are increasingly capable of &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/anomaly-detection"&gt;detecting anomalies&lt;/a&gt;, but most can't explain them. When automated decisions behave unexpectedly, organizations often struggle to determine whether the cause is model drift, data corruption, environmental change or rational adaptation. This diagnostic gap is emerging as one of the most significant operational risks in enterprise AI.&lt;/p&gt; 
&lt;p&gt;As organizations move from experimentation to large-scale automation, AI increasingly drives mission-critical workflows. From fraud detection to logistics planning, &lt;a href="https://www.techtarget.com/searchitoperations/tip/Beyond-automation-Using-GenAI-to-modernize-IT-operations"&gt;automated systems influence operational decisions&lt;/a&gt; every minute. But when outputs conflict with operational reality -- approving the wrong transaction, misclassifying a customer or triggering false alerts -- teams often struggle to determine why.&lt;/p&gt; 
&lt;p&gt;Most systems can detect that something unusual occurred. Few can explain it. That gap is quickly becoming a governance challenge.&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Detection isn't diagnosis"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Detection isn't diagnosis&lt;/h2&gt;
 &lt;p&gt;Consider a real-time logistics system in which estimated arrival times suddenly spike in variance. Monitoring tools flag the anomaly immediately. But what caused it?&lt;/p&gt;
 &lt;p&gt;Was it caused by degraded GPS signals? A change in driver behavior? A modification in the road network? Or simply the model responding rationally to new traffic conditions?&lt;/p&gt;
 &lt;blockquote class="main-article-pullquote"&gt;
  &lt;div class="main-article-pullquote-inner"&gt;
   &lt;figure&gt;
    Without structured diagnostic intelligence, organizations risk misattributing failures.
   &lt;/figure&gt;
   &lt;i class="icon" data-icon="z"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/blockquote&gt;
 &lt;p&gt;Without diagnostic tools, engineering &lt;a href="https://www.techtarget.com/searchenterpriseai/tip/AI-model-optimization-How-to-do-it-and-why-it-matters"&gt;teams often default to retraining the model&lt;/a&gt;. Days later, they might discover that the true cause was an upstream data encoding change introduced by a third-party mapping provider. Engineering time is lost, and operator confidence declines.&lt;/p&gt;
 &lt;p&gt;This pattern appears across industries. Detection answers one question: Did something unexpected happen?&lt;/p&gt;
 &lt;p&gt;Diagnosis answers the more important one: Why did it happen?&lt;/p&gt;
 &lt;p&gt;Without structured diagnostic intelligence, organizations risk misattributing failures. A frontline operator is blamed for noncompliance when the root cause lies in outdated model assumptions. A data science team retrains a model unnecessarily when the underlying issue is ambiguous data encoding. Executives lose confidence in automation simply because the system can't explain its own behavior in operational terms.&lt;/p&gt;
 &lt;p&gt;In complex production environments, unexpected outcomes rarely have a single cause. They emerge from interactions among model behavior, data pipelines, environmental shifts and human adaptation. Treating every anomaly as model failure oversimplifies reality.&lt;/p&gt;
&lt;/section&gt;         
&lt;section class="section main-article-chapter" data-menu-title="The trust problem"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;The trust problem&lt;/h2&gt;
 &lt;p&gt;Successful enterprise AI depends not only on model performance but also on &lt;a href="https://www.techtarget.com/searchenterpriseai/feature/How-to-ensure-AI-transparency-explainability-and-trust"&gt;institutional trust&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;When operators repeatedly encounter AI recommendations that appear inconsistent or unexplained, override behavior increases. Manual reviews return. Governance committees grow cautious. Automation initiatives slow down.&lt;/p&gt;
 &lt;p&gt;The technical issue becomes an organizational one.&lt;/p&gt;
 &lt;p&gt;Without diagnostic clarity, accountability becomes diffuse. Audit trails might show what happened, but they rarely explain why. As AI systems gain autonomy in operational workflows, this gap widens, increasing risk rather than reducing it.&lt;/p&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="The diagnostic maturity check"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;The diagnostic maturity check&lt;/h2&gt;
 &lt;p&gt;Before &lt;a href="https://www.techtarget.com/searchenterpriseai/tip/Best-practices-for-building-scalable-AI-infrastructure"&gt;scaling automation&lt;/a&gt; further, technology leaders should ask the following four questions:&lt;/p&gt;
 &lt;ol class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Attribution. &lt;/b&gt;If a model produces highly unusual outputs, can the organization distinguish between a data pipeline failure and a genuine change in external conditions?&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Override analysis. &lt;/b&gt;Are teams analyzing why human operators override AI recommendations, or are they simply recording that overrides occurred?&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Drift versus context. &lt;/b&gt;Do monitoring tools treat deviations as binary errors, or can they determine whether the model is adapting rationally to new environmental constraints?&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Audit speed. &lt;/b&gt;If an auditor or regulator asks for the reasoning behind an automated decision, can the organization produce a clear explanation within minutes, or does it require a week of data analysis?&lt;/li&gt; 
 &lt;/ol&gt;
 &lt;p&gt;The answers to these questions often reveal whether an organization truly understands how its AI systems behave in production.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="What diagnostic intelligence looks like"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What diagnostic intelligence looks like&lt;/h2&gt;
 &lt;p&gt;Diagnostic intelligence means embedding structured reasoning into AI operations. It requires systems that can investigate their own behavior.&lt;/p&gt;
 &lt;p&gt;In practice, four capabilities define this approach.&lt;/p&gt;
 &lt;h3&gt;Behavioral stability monitoring&lt;/h3&gt;
 &lt;p&gt;Traditional monitoring focuses on accuracy metrics or threshold alerts. Diagnostic systems track how model behavior evolves over time, &lt;a href="https://www.techtarget.com/searchenterpriseai/tip/How-to-identify-and-manage-AI-model-drift"&gt;identifying patterns that signal drift&lt;/a&gt;, instability or environmental shifts.&lt;/p&gt;
 &lt;h3&gt;Data integrity validation&lt;/h3&gt;
 &lt;p&gt;Many AI failures originate upstream in data pipelines. Diagnostic systems verify that input representations align with business intent and that encoding changes or schema mismatches are detected before they propagate downstream.&lt;/p&gt;
 &lt;h3&gt;Contextual reality assessment&lt;/h3&gt;
 &lt;p&gt;Not all deviations indicate failure. Sometimes the environment changes while the model behaves rationally. Diagnostic frameworks incorporate external context, such as operational disruptions, regulatory changes or supply chain events, to evaluate model behavior relative to current conditions rather than historical baselines.&lt;/p&gt;
 &lt;h3&gt;Structured evidence aggregation&lt;/h3&gt;
 &lt;p&gt;True diagnosis requires combining signals from multiple subsystems. Diagnostic frameworks synthesize evidence from models, data pipelines and operational signals to produce traceable explanations.&lt;/p&gt;
 &lt;p&gt;Instead of a generic alert like &lt;i&gt;anomaly detected,&lt;/i&gt; a diagnostic system might report the following:&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Variance increase attributed to upstream data encoding changes affecting 14% of records between 9:00 and 11:30. Model behavior consistent with historical performance on clean data. Recommended action: data pipeline remediation rather than model retraining.&lt;/i&gt;&lt;/p&gt;
 &lt;p&gt;This added level of specificity transforms alerts into actionable investigations.&lt;/p&gt;
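&lt;p&gt;A finding like the one above is, in effect, a structured record rather than a free-text alert. As a minimal sketch of what such a record could look like in code (the field names here are hypothetical, not drawn from any particular product):&lt;/p&gt;

```python
from dataclasses import dataclass

@dataclass
class DiagnosticFinding:
    """One structured explanation for an observed anomaly (hypothetical schema)."""
    signal: str               # what was observed, e.g. "Variance increase"
    attributed_cause: str     # the subsystem the evidence points to
    affected_fraction: float  # share of records implicated, 0.0 to 1.0
    window: str               # time window the evidence covers
    model_consistent: bool    # does the model match its baseline on clean data?
    recommended_action: str   # remediation guidance instead of a bare alert

    def summary(self) -> str:
        """Render the finding as the kind of operational sentence described above."""
        pct = round(self.affected_fraction * 100)
        return (
            f"{self.signal} attributed to {self.attributed_cause}, "
            f"affecting {pct}% of records during {self.window}. "
            f"Recommended action: {self.recommended_action}."
        )

finding = DiagnosticFinding(
    signal="Variance increase",
    attributed_cause="upstream data encoding changes",
    affected_fraction=0.14,
    window="9:00-11:30",
    model_consistent=True,
    recommended_action="data pipeline remediation rather than model retraining",
)
print(finding.summary())
```

&lt;p&gt;The point of such a record is that each field is separately queryable and auditable, so the same evidence can feed dashboards, audit trails and remediation workflows instead of living only in an alert message.&lt;/p&gt;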
 &lt;blockquote class="main-article-pullquote"&gt;
  &lt;div class="main-article-pullquote-inner"&gt;
   &lt;figure&gt;
    Organizations that adopt diagnostic intelligence shift their response to AI deviations. Instead of asking who made the mistake, leaders ask more productive questions.
   &lt;/figure&gt;
   &lt;i class="icon" data-icon="z"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/blockquote&gt;
&lt;/section&gt;               
&lt;section class="section main-article-chapter" data-menu-title="From blame to continuous improvement"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;From blame to continuous improvement&lt;/h2&gt;
 &lt;p&gt;Organizations that adopt diagnostic intelligence shift their response to AI deviations. Instead of asking who made the mistake, leaders ask more productive questions, such as the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Did model assumptions drift?&lt;/li&gt; 
  &lt;li&gt;Did the operating environment change?&lt;/li&gt; 
  &lt;li&gt;Did data representations miscommunicate intent?&lt;/li&gt; 
  &lt;li&gt;Is the system behaving rationally under new constraints?&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;This shift reframes AI governance from reactive troubleshooting to continuous improvement.&lt;/p&gt;
 &lt;p&gt;Diagnostic layers reduce unnecessary retraining cycles. They preserve operator trust, strengthen auditability and, most importantly, align automation with accountability.&lt;/p&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="What CIOs should do next"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What CIOs should do next&lt;/h2&gt;
 &lt;p&gt;Enterprise leaders can begin building diagnostic maturity by focusing on several practical steps:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Treat anomaly detection as the starting point of AI governance, not the final safeguard.&lt;/li&gt; 
  &lt;li&gt;Establish diagnostic workflows that investigate system behavior before retraining models.&lt;/li&gt; 
  &lt;li&gt;Track operator overrides as signals of system misalignment.&lt;/li&gt; 
  &lt;li&gt;Build audit trails that explain why decisions occurred, not just what decisions were made.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;This approach transforms AI monitoring from simple alerting into &lt;a href="https://www.techtarget.com/searchbusinessanalytics/definition/operational-business-intelligence"&gt;operational intelligence&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="What's next for enterprise AI"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What's next for enterprise AI&lt;/h2&gt;
 &lt;p&gt;The first wave of enterprise AI focused on prediction. The second emphasized automation. The next phase must focus on diagnosis.&lt;/p&gt;
 &lt;p&gt;As AI systems increasingly influence financial approvals, operational workflows and safety-critical decisions, the cost of misattributed failures grows. Detection alone is no longer sufficient. Organizations need AI systems capable of explaining and correcting their own behavior.&lt;/p&gt;
 &lt;p&gt;For CIOs and technology leaders, investing in diagnostic intelligence isn't merely a technical enhancement. It's a governance imperative, ensuring that reliability, transparency and trust scale as automation scales.&lt;/p&gt;
 &lt;p&gt;The organizations that succeed in the next decade won't simply deploy AI. They will deploy AI systems that can explain themselves before trust, compliance or safety is compromised.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Rashmi Choudhary is a data scientist specializing in large-scale AI systems for routing, navigation and operational intelligence. An IEEE senior member and inventor on multiple patents in transportation AI, she focuses on building reliable and accountable AI systems for safety-critical environments. She writes about AI governance, infrastructure intelligence and production system reliability.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Enterprise AI detects anomalies but can't always explain why they happen. Next up for these systems is a focus on diagnosis to enhance trust, compliance and safety.</description>
            <image>https://cdn.ttgtmedia.com/rms/onlineimages/machine%20leaning_g1150854211.jpg</image>
            <link>https://www.techtarget.com/searchenterpriseai/post/Why-enterprise-AI-needs-diagnostic-intelligence</link>
            <pubDate>Wed, 25 Mar 2026 16:45:00 GMT</pubDate>
            <title>Why enterprise AI needs diagnostic intelligence</title>
        </item>
        <item>
            <body>&lt;p&gt;Amazon Bedrock -- also known as AWS Bedrock -- is a machine learning platform used to build &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/generative-AI"&gt;generative artificial intelligence&lt;/a&gt; (AI) applications on the Amazon Web Services cloud computing platform. Bedrock uses &lt;a href="https://www.techtarget.com/whatis/feature/Foundation-models-explained-Everything-you-need-to-know"&gt;foundation models&lt;/a&gt; to simplify the creation of these apps and make the process more efficient.&lt;/p&gt; 
&lt;p&gt;Foundation models are adaptable AI models trained on &lt;a href="https://www.techtarget.com/searchdatamanagement/definition/big-data"&gt;large data sets&lt;/a&gt; to perform many kinds of tasks. They're versatile, reusable and don't require retraining for each new task. Bedrock replaces the physical infrastructure typically used to &lt;a href="https://www.techtarget.com/searchenterpriseai/tip/Assessing-different-types-of-generative-AI-applications"&gt;build generative AI apps&lt;/a&gt; with foundation models, simplifying the app building process.&lt;/p&gt; 
&lt;p&gt;Bedrock is a competitor to OpenAI &lt;a href="https://www.techtarget.com/whatis/definition/ChatGPT"&gt;ChatGPT&lt;/a&gt; and &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/Dall-E"&gt;Dall-E&lt;/a&gt; 2. It is also compared to &lt;a href="https://www.techtarget.com/searchaws/definition/Amazon-SageMaker"&gt;Amazon SageMaker&lt;/a&gt;; whereas SageMaker is used to build and train complex machine learning models, Bedrock is more focused on building generative AI apps.&lt;/p&gt; 
&lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/amazon_quicksight_generative_ai_screenshot-f.jpg"&gt;
 &lt;img data-src="https://www.techtarget.com/rms/onlineimages/amazon_quicksight_generative_ai_screenshot-f_mobile.jpg" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/amazon_quicksight_generative_ai_screenshot-f_mobile.jpg 960w,https://www.techtarget.com/rms/onlineimages/amazon_quicksight_generative_ai_screenshot-f.jpg 1280w" alt="Screenshot of Amazon QuickSight Q generative business intelligence (BI) dashboard built using Amazon Bedrock" data-credit="AWS" height="334" width="560"&gt;
 &lt;figcaption&gt;
  &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Amazon QuickSight Q is a generative business intelligence platform built using Amazon Bedrock, which provides access to foundation models.
 &lt;/figcaption&gt;
 &lt;div class="main-article-image-enlarge"&gt;
  &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
 &lt;/div&gt;
&lt;/figure&gt; 
&lt;section class="section main-article-chapter" data-menu-title="How Amazon Bedrock works"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How Amazon Bedrock works&lt;/h2&gt;
 &lt;p&gt;Amazon Bedrock gives software developers access to a wide range of foundation models from AI startups, such as AI21 Labs, Anthropic, Cohere and Stability AI, through a serverless application programming interface (&lt;a href="https://www.techtarget.com/searchapparchitecture/definition/application-program-interface-API"&gt;API&lt;/a&gt;). For example, &lt;a href="https://www.techtarget.com/whatis/definition/large-language-model-LLM"&gt;large language models&lt;/a&gt; such as Anthropic's Claude 2 and open source text-to-image models such as Stability AI's Stable Diffusion XL can be used with Bedrock to simplify the delivery of generative AI apps.&lt;/p&gt;
 &lt;p&gt;On their own, foundation models are adept at comprehending &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/natural-language-processing-NLP"&gt;natural language&lt;/a&gt; inputs and processing them to produce text or images as responses or outputs. However, they can't perform complex tasks or actions without direction.&lt;/p&gt;
 &lt;p&gt;AWS &lt;a target="_blank" href="https://aws.amazon.com/bedrock/agents/" rel="noopener"&gt;released&lt;/a&gt; Agents for Amazon Bedrock to designate and automate complex tasks for a model without requiring a developer to manually write the code needed to do so. Specifically, developers can use agents to connect foundation models to their &lt;a href="https://www.techtarget.com/searchdatamanagement/tip/Open-source-vs-proprietary-database-management"&gt;proprietary data sources&lt;/a&gt; so the apps they build will produce up-to-date answers based on their own data. When a user employs a generative AI app built with Bedrock, an agent makes API calls that retrieve the data needed from proprietary sources to answer the user's requests or queries.&lt;/p&gt;
 &lt;p&gt;In addition to third-party foundation models, Amazon allows &lt;a target="_blank" href="https://aws.amazon.com/bedrock/amazon-models/titan/" rel="noopener"&gt;access to its own&lt;/a&gt; Titan foundation models, which include Titan Text to generate text and Titan Embeddings to translate textual inputs into numerical representations.&lt;/p&gt;
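&lt;p&gt;To make the serverless API concrete, here is a minimal sketch of invoking a Titan text model through Bedrock's runtime API using boto3, the AWS SDK for Python. The request body follows the documented Titan text format; the model ID and the availability of credentials and model access are assumptions about your environment, so the network call is kept separate from the request construction:&lt;/p&gt;

```python
import json

def build_titan_request(prompt, max_tokens=256, temperature=0.5):
    """Build a request body in the Titan Text format Bedrock expects."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    })

def invoke_titan(prompt):
    """Send the prompt to Bedrock. Requires AWS credentials and Titan access."""
    import boto3  # AWS SDK for Python; imported here so the builder stays standalone
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="amazon.titan-text-express-v1",  # assumed model ID for illustration
        contentType="application/json",
        accept="application/json",
        body=build_titan_request(prompt),
    )
    return json.loads(response["body"].read())

if __name__ == "__main__":
    print(build_titan_request("Summarize our Q3 incident reports."))
```

&lt;p&gt;Because Bedrock is serverless, this is the whole integration surface: there is no model hosting or infrastructure to manage, only a signed API call with a model-specific JSON body.&lt;/p&gt;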
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="Applications developers can make with Amazon Bedrock"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Applications developers can make with Amazon Bedrock&lt;/h2&gt;
 &lt;p&gt;Amazon Bedrock can be used to build the following apps that are useful when applied to real-world use cases and workloads:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;Text generation.&lt;/b&gt; Apps built with Amazon Bedrock can generate original written text in various forms, such as short stories, blog posts, news articles and social media posts.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Conversational AI.&lt;/b&gt; Customized chatbots or virtual assistants built with Bedrock are based on foundation models that have access to proprietary data owned by the developer or software vendor, enabling more accurate &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/conversational-AI"&gt;conversational AI&lt;/a&gt; responses to users' queries.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;Text summarization.&lt;/b&gt; A simple Bedrock app offers the capability to summarize text without forcing a user to pore over lengthy documents or materials.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;Image generation.&lt;/b&gt; Bedrock's API can interface with various types of foundation models, including text-to-image models such as Stability AI's Stable Diffusion XL. As a result, an app built on Bedrock can take a user's request or prompt for a specific image and generate that image.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;div class="extra-info"&gt;
  &lt;div class="extra-info-inner"&gt;
   &lt;h3&gt;Learn more about generative AI and how it might change enterprise processes.&lt;/h3&gt; 
   &lt;p&gt;&lt;a href="https://www.techtarget.com/searchenterpriseai/tip/Generative-AI-landscape-Potential-future-trends"&gt;Generative AI landscape: Potential future trends&lt;/a&gt;&lt;/p&gt; 
   &lt;p&gt;&lt;a href="https://www.techtarget.com/searchenterpriseai/feature/Generative-AI-in-business-Fast-uptake-earmarked-funding"&gt;Generative AI in business: Fast uptake, earmarked funding&lt;/a&gt;&lt;/p&gt; 
   &lt;p&gt;&lt;a href="https://www.techtarget.com/searchenterpriseai/feature/New-skills-in-demand-as-generative-AI-reshapes-tech-roles"&gt;New skills in demand as generative AI reshapes tech roles&lt;/a&gt;&lt;/p&gt; 
   &lt;p&gt;&lt;a href="https://www.techtarget.com/searchcustomerexperience/tip/Generative-AI-tools-to-consider-for-marketing-and-sales"&gt;Generative AI tools to consider for marketing and sales&lt;/a&gt;&lt;/p&gt; 
   &lt;p&gt;&lt;a href="https://www.techtarget.com/whatis/feature/Will-AI-replace-jobs-9-job-types-that-might-be-affected"&gt;Will AI replace jobs? Job types that might be affected&lt;/a&gt;&lt;/p&gt;
  &lt;/div&gt;
 &lt;/div&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Is Amazon Bedrock generally available?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Is Amazon Bedrock generally available?&lt;/h2&gt;
 &lt;p&gt;Amazon Bedrock became generally available in September 2023. Two &lt;a target="_blank" href="https://aws.amazon.com/bedrock/pricing/" rel="noopener"&gt;pricing plans&lt;/a&gt; are available. One plan lets users access foundation models on a pay-as-you-go basis. The other lets users provision throughput to meet their performance requirements, but it requires a time-based term commitment.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;As businesses delve into generative AI and the myriad ways to customize generative AI applications, it's worth &lt;/i&gt;&lt;a href="https://www.techtarget.com/whatis/feature/Pros-and-cons-of-AI-generated-content"&gt;&lt;i&gt;understanding the pros and cons of AI-generated content&lt;/i&gt;&lt;/a&gt;&lt;i&gt; to be prepared for the future.&lt;/i&gt;&lt;/p&gt;
 &lt;div class="youtube-iframe-container"&gt;
   &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/1U83yhGY_pI?si=efl2tRiWTrs2QWa7&amp;amp;autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://www.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
&lt;/section&gt;</body>
            <description>Amazon Bedrock -- also known as AWS Bedrock -- is a machine learning platform used to build generative artificial intelligence (AI) applications on the Amazon Web Services cloud computing platform.</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/2.jpg</image>
            <link>https://www.techtarget.com/searchenterpriseai/definition/Amazon-Bedrock-AWS-Bedrock</link>
            <pubDate>Tue, 17 Dec 2024 00:00:00 GMT</pubDate>
            <title>What is Amazon Bedrock (AWS Bedrock)?</title>
        </item>
        <item>
            <body>&lt;p&gt;One year after investing in Microsoft 365 backup-as-a-service provider Alcion, Veeam Software acquired the startup with roots in AI and security.&lt;/p&gt; 
&lt;p&gt;Analysts and Veeam executives said the acquisition will strengthen Veeam's budding as-a-service offerings. Earlier this year, the vendor launched Veeam Data Cloud backup as a service to protect Microsoft 365 and Azure workloads.&lt;/p&gt; 
&lt;p&gt;"Veeam, after resisting it for many years, finally went into the as-a-service business," said Christophe Bertrand, an analyst at TheCube Research.&lt;/p&gt; 
&lt;p&gt;The Veeam acquisition, which closed in mid-September, is the data protection vendor's second purchase of a company founded by Niraj Tolia and Vaibhav Kamra. In 2020, Veeam acquired their Kubernetes backup provider Kasten. In September 2023, &lt;a href="https://www.techtarget.com/searchdatabackup/news/366552363/Veeam-leads-funding-round-for-SaaS-backup-provider-Alcion"&gt;Veeam led a $21 million funding round&lt;/a&gt; for recently out-of-stealth Alcion.&lt;/p&gt; 
&lt;p&gt;The data protection market has seen a surge in acquisition activity in the last year. For example, &lt;a href="https://www.techtarget.com/searchdatabackup/news/366611857/Commvault-acquisition-of-Clumio-for-S3-speaks-volumes"&gt;Commvault acquired Clumio&lt;/a&gt; this week, &lt;a href="https://www.techtarget.com/searchdatabackup/news/366569336/Cohesity-Veritas-combine-as-new-data-protection-company"&gt;Cohesity and Veritas&lt;/a&gt; are merging, and Veeam's purchase of Cirrus from CT4 late last year eventually became the Veeam Data Cloud. Earlier this year, Veeam also acquired &lt;a href="https://www.techtarget.com/searchdatabackup/news/366581913/Veeam-acquires-Coveware-for-incident-response-capabilities"&gt;incident response vendor Coveware&lt;/a&gt;.&lt;/p&gt; 
&lt;p&gt;Veeam did not release terms of its latest acquisition.&lt;/p&gt; 
&lt;p&gt;"Veeam is not historically an often-acquiring organization, but that has clearly changed in the last few years," said Rick Vanover, Veeam's vice president of product strategy. "I don't see that behavior stopping."&lt;/p&gt; 
&lt;section class="section main-article-chapter" data-menu-title="Alcion at Veeam's service"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Alcion at Veeam's service&lt;/h2&gt;
 &lt;p&gt;The latest Veeam acquisition brings additional expertise in fast-growing areas, with as-a-service delivery at the top of the list, Vanover said.&lt;/p&gt;
 &lt;div class="imagecaption alignLeft"&gt;
  &lt;img src="https://cdn.ttgtmedia.com/rms/onlineImages/tolia_niraj.jpg" alt="Niraj Tolia, CTO, Veeam"&gt;Niraj Tolia
 &lt;/div&gt;
 &lt;p&gt;Alcion's team of fewer than 50 employees joins Veeam, including Tolia as the new CTO and Kamra as vice president of technology. Tolia will lead product strategy and engineering for the Veeam Data Cloud. He succeeds Danny Allan, who left his CTO position to take on the same role at cybersecurity provider Snyk.&lt;/p&gt;
 &lt;p&gt;A year ago, when asked if Alcion was open to an acquisition by Veeam, Tolia said the company's focus was on an independent path, growth, product enhancements and customer value. Now it's about going to the next level for Alcion and Veeam.&lt;/p&gt;
 &lt;p&gt;"For our customers, it's just a much larger stage, getting to a much larger scale, much faster," Tolia said.&lt;/p&gt;
 &lt;p&gt;Alcion claims to have hundreds of customers. Those customers will receive an offer to transition to Veeam Data Cloud, said Brandt Urban, Veeam's senior vice president of worldwide cloud sales. Veeam also offers standalone Backup for Microsoft Azure and Backup for Microsoft 365 products.&lt;/p&gt;
 &lt;p&gt;Veeam, however, has not made a final decision about the end result for the &lt;a href="https://www.techtarget.com/searchdatabackup/news/366538893/Alcion-applies-AI-security-focus-to-Microsoft-365-backup"&gt;Alcion product&lt;/a&gt; and does not have a timetable for the integration yet.&lt;/p&gt;
 &lt;p&gt;"Having us infuse this fastest-growing product in Veeam's history with the talent and the thought leadership that's coming in with Niraj, and the development capabilities across the Alcion team, is going to let us enhance the product faster, add more capabilities and then add more workloads," Urban said of Veeam Data Cloud.&lt;/p&gt;
 &lt;p&gt;Bertrand said he is hoping to see Veeam cover additional SaaS workloads with its data protection, pointing to other collaboration and DevOps platforms as examples. In addition to Microsoft 365 protection, Veeam also offers Backup for Salesforce.&lt;/p&gt;
 &lt;p&gt;&lt;b&gt;"&lt;/b&gt;I look at the strategy that &lt;a href="https://www.techtarget.com/searchdatabackup/news/365530034/HYCU-R-Cloud-expands-data-protection-for-SaaS"&gt;HYCU has adopted&lt;/a&gt;, which is different. They cover many, many platforms through a different mechanism than Veeam does," Bertrand said, referring to HYCU's R-Cloud. "It feels a little bit like an arms race has started. In the next few months, we'll know exactly which ones will be the most successful."&lt;/p&gt;
&lt;/section&gt;           
&lt;section class="section main-article-chapter" data-menu-title="AI and security and AI, oh my!"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;AI and security and AI, oh my!&lt;/h2&gt;
 &lt;p&gt;The AI in Alcion's product helps administrators perform intelligent backups and detect malware. Krista Case, an analyst at The Futurum Group, said she sees Alcion using AI strategically, for example to&amp;nbsp;adapt &lt;a target="_blank" href="https://docs.alcion.ai/concepts/backup-and-restore/#backup" rel="noopener"&gt;backup schedules&lt;/a&gt; based on data modification patterns, to trigger a backup based on identification of potentially malicious activity, and to suggest the best recovery point.&lt;/p&gt;
 &lt;p&gt;"When we talk with practitioners&amp;nbsp;about&amp;nbsp;cyber resilience, they are most concerned about minimizing data loss and downtime -- and these Alcion capabilities directly address that requirement," Case said.&lt;/p&gt;
 &lt;blockquote class="main-article-pullquote"&gt;
  &lt;div class="main-article-pullquote-inner"&gt;
   &lt;figure&gt;
    AI is actually a positive as long as it is not just hype.
   &lt;/figure&gt;
   &lt;figcaption&gt;
    &lt;strong&gt;Christophe Bertrand&lt;/strong&gt;Analyst, TheCube Research 
   &lt;/figcaption&gt;
   &lt;i class="icon" data-icon="z"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/blockquote&gt;
 &lt;p&gt;End users and partners want vendors like Veeam to use AI in a smart and governed way, for example to help fight ransomware, Bertrand said. Veeam's AI capabilities include inline malware detection, its Intelligent Diagnostics service and a forthcoming &lt;a href="https://www.techtarget.com/searchdatabackup/news/366587408/Veeam-backup-and-security-updates-include-cloud-vault-copilot"&gt;Copilot for its Backup for Microsoft 365&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;"AI is actually a positive as long as it is not just hype," Bertrand said.&lt;/p&gt;
 &lt;p&gt;And there is a lot of hype, with seemingly every tech vendor touting AI-focused features and products.&lt;/p&gt;
 &lt;p&gt;"The focus [for users] is less on having a chatbot, and it is more about the outcome that AI can enable," Case said. "This might include detecting attacks that otherwise would have fallen through the cracks, for example."&lt;/p&gt;
 &lt;p&gt;Veeam executives noted the importance of clear AI benefits for users.&lt;/p&gt;
 &lt;p&gt;"We keep that top of mind because otherwise it's a really expensive experiment," Vanover said.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;Paul Crocetti is an executive editor at TechTarget Editorial. Since 2015, he has worked on TechTarget's Storage, Data Backup and Disaster Recovery sites.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Veeam embarks on the acquisition path again with its purchase of Alcion. The deal brings several employees who specialize in AI and as-a-service knowledge.</description>
            <image>https://cdn.ttgtmedia.com/visuals/searchBusinessAnalytics/BI_management/businessanalytics_article_001.jpg</image>
            <link>https://www.techtarget.com/searchdatabackup/news/366612117/Veeam-acquisition-of-Alcion-supports-push-into-as-a-service-AI</link>
            <pubDate>Fri, 27 Sep 2024 10:11:00 GMT</pubDate>
            <title>Veeam acquisition of Alcion supports push into as-a-service, AI</title>
        </item>
        <item>
            <body>&lt;section class="section main-article-chapter" data-menu-title="What is BERT?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is BERT?&lt;/h2&gt;
 &lt;p&gt;The BERT language model is an open source machine learning framework for natural language processing (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/natural-language-processing-NLP"&gt;NLP&lt;/a&gt;). BERT is designed to help computers understand the meaning of ambiguous language in text by using surrounding text to establish context. The BERT framework was pretrained using text from Wikipedia and can be fine-tuned with question-and-answer data sets.&lt;/p&gt;
 &lt;p&gt;BERT, which stands for Bidirectional Encoder Representations from Transformers, is based on &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/transformer-model"&gt;transformers&lt;/a&gt;, a &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/deep-learning-deep-neural-network"&gt;deep learning&lt;/a&gt; model in which every output element is connected to every input element, and the weightings between them are dynamically calculated based upon their connection.&lt;/p&gt;
 &lt;p&gt;Historically, language models could only read input text sequentially -- either left-to-right or right-to-left -- but couldn't do both at the same time. BERT is different because it's designed to read in both directions at once. The introduction of transformer models enabled this capability, which is known as bidirectionality. Using bidirectionality, BERT is pretrained on two different but related NLP tasks: masked language modeling (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/masked-language-models-MLMs"&gt;MLM&lt;/a&gt;) and next sentence prediction (NSP).&lt;/p&gt;
 &lt;p&gt;The objective of MLM training is to hide a word in a sentence and then have the program predict what word has been hidden based on the hidden word's context. The objective of NSP training is to have the program predict whether two given sentences have a logical, sequential connection or whether their relationship is simply random.&lt;/p&gt;
 &lt;div class="youtube-iframe-container"&gt;
   &lt;iframe id="ytplayer-0" src="https://www.youtube.com/embed/tUT9PD5ttMo?si=ummGM_MO4rnGzGr4&amp;amp;autoplay=0&amp;amp;modestbranding=1&amp;amp;rel=0&amp;amp;widget_referrer=null&amp;amp;enablejsapi=1&amp;amp;origin=https://www.techtarget.com" type="text/html" height="360" width="640" frameborder="0"&gt;&lt;/iframe&gt;
 &lt;/div&gt;
&lt;/section&gt;      
&lt;section class="section main-article-chapter" data-menu-title="Background and history of BERT"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Background and history of BERT&lt;/h2&gt;
 &lt;p&gt;Google first introduced the transformer model in 2017. At that time, language models primarily used recurrent neural networks (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/recurrent-neural-networks"&gt;RNN&lt;/a&gt;) and convolutional neural networks (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/convolutional-neural-network"&gt;CNN&lt;/a&gt;) to handle NLP tasks.&lt;/p&gt;
 &lt;p&gt;&lt;a href="https://www.techtarget.com/searchenterpriseai/feature/CNN-vs-RNN-How-they-differ-and-where-they-overlap"&gt;CNNs and RNNs&lt;/a&gt; are competent models; however, they require sequences of data to be processed in a fixed order. Transformer models are considered a significant improvement because they don't require data sequences to be processed in any fixed order.&lt;/p&gt;
 &lt;p&gt;Because transformers can process data in any order, they enable training on larger amounts of data than was possible before their existence. This facilitated the creation of pretrained models like BERT, which was trained on massive amounts of language data prior to its release.&lt;/p&gt;
 &lt;p&gt;In 2018, Google introduced and open sourced BERT. In its research stages, the framework achieved state-of-the-art results in 11 natural language understanding (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/natural-language-understanding-NLU"&gt;NLU&lt;/a&gt;) tasks, including &lt;a href="https://www.techtarget.com/searchbusinessanalytics/definition/opinion-mining-sentiment-mining"&gt;sentiment analysis&lt;/a&gt;, semantic role labeling, text classification and the &lt;a href="https://www.techtarget.com/searchdatamanagement/definition/disambiguation"&gt;disambiguation&lt;/a&gt; of words with multiple meanings. Researchers at Google AI Language published a report that same year explaining these results.&lt;/p&gt;
 &lt;p&gt;Completing these tasks distinguished BERT from previous language models, such as word2vec and GloVe. Those models were limited when interpreting context and polysemous words, or words with multiple meanings. BERT effectively addresses ambiguity, which is the greatest challenge to NLU, according to research scientists in the field. It's capable of parsing language with a relatively human-like common sense.&lt;/p&gt;
 &lt;p&gt;In October 2019, Google announced that it would begin applying BERT to its U.S.-based production &lt;a href="https://www.techtarget.com/whatis/feature/Google-algorithms-explained-Everything-you-need-to-know"&gt;search algorithms&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;It is &lt;a target="_blank" href="https://blog.google/products/search/search-language-understanding-bert/" rel="noopener"&gt;estimated&lt;/a&gt; that BERT enhances Google's understanding of approximately 10% of U.S.-based English language Google search queries. Google recommends that organizations not try to optimize content for BERT, as BERT aims to provide a natural-feeling search experience. Users are advised to keep queries and content focused on the natural subject matter and natural user experience.&lt;/p&gt;
 &lt;p&gt;By December 2019, BERT had been applied to more than 70 different languages. The model has had a large impact on voice search as well as text-based search, which prior to 2018 had been error-prone with Google's NLP techniques. Once BERT was applied to many languages, it improved search engine optimization; its proficiency in understanding context helps it interpret patterns that different languages share without having to completely understand the language.&lt;/p&gt;
 &lt;p&gt;BERT went on to influence many &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence"&gt;artificial intelligence&lt;/a&gt; systems. Various lighter versions of BERT and similar training methods have been applied to models from GPT-2 to &lt;a href="https://www.techtarget.com/whatis/definition/ChatGPT"&gt;ChatGPT&lt;/a&gt;.&lt;/p&gt;
&lt;/section&gt;          
&lt;section class="section main-article-chapter" data-menu-title="How BERT works"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How BERT works&lt;/h2&gt;
 &lt;p&gt;The goal of any given NLP technique is to understand human language as it is spoken naturally. In BERT's case, this means predicting a masked word from its surrounding context. To do this, models typically train using a large repository of specialized, labeled training data. This process involves linguists doing laborious manual &lt;a href="https://www.techtarget.com/whatis/definition/data-labeling"&gt;data labeling&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;BERT, however, was pretrained using only a collection of unlabeled, plain text, namely the entirety of English Wikipedia and the BooksCorpus. It can continue to learn through &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/unsupervised-learning"&gt;unsupervised learning&lt;/a&gt; from unlabeled text, improving even as it's applied in practical settings such as Google search.&lt;/p&gt;
 &lt;p&gt;BERT's pretraining serves as a base layer of knowledge from which it can build its responses. From there, BERT can adapt to the ever-growing body of searchable content and queries, and it can be fine-tuned to a user's specifications. This process is known as &lt;a href="https://www.techtarget.com/searchcio/definition/transfer-learning"&gt;transfer learning&lt;/a&gt;. Aside from this pretraining process, BERT has multiple other aspects it relies on to function as intended, including the following:&lt;/p&gt;
 &lt;h3&gt;Transformers&lt;/h3&gt;
 &lt;p&gt;Google's work on transformers made BERT possible. The transformer is the part of the model that gives BERT its increased capacity for understanding context and ambiguity in language. The transformer processes any given word in relation to all other words in a sentence, rather than processing them one at a time. By looking at all surrounding words, the transformer enables BERT to understand the full context of the word and therefore better understand searcher intent.&lt;/p&gt;
 &lt;p&gt;This is contrasted against the traditional method of language processing, known as word embedding. This approach was used in models such as GloVe and word2vec. It would map every single word to a &lt;a href="https://www.techtarget.com/whatis/definition/vector"&gt;vector&lt;/a&gt;, which represented only one dimension of that word's meaning.&lt;/p&gt;
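As a rough illustration of this limitation, the toy sketch below uses invented two-dimensional vectors to show how a static, word2vec-style embedding assigns "bank" the same vector in two sentences where it means different things:

```python
# Toy illustration of static word embeddings (word2vec/GloVe style).
# These two-dimensional vectors are invented for demonstration; real
# models learn hundreds of dimensions from large corpora.
static_embeddings = {
    "bank":  [0.7, 0.1],   # one fixed vector, regardless of context
    "river": [0.2, 0.9],
    "money": [0.9, 0.0],
}

def embed(sentence):
    """Look up each known word's fixed vector."""
    return {w: static_embeddings[w] for w in sentence.split()
            if w in static_embeddings}

# "bank" maps to the same vector in both sentences, even though it
# means a financial institution in one and a riverside in the other.
v1 = embed("the bank approved the loan")["bank"]
v2 = embed("we sat on the river bank")["bank"]
assert v1 == v2  # static embeddings cannot distinguish the two senses
```

Because the lookup ignores the surrounding words, both senses collapse into one point in the vector space, which is exactly the ambiguity BERT's contextual approach addresses.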
 &lt;h3&gt;Masked language modeling&lt;/h3&gt;
 &lt;p&gt;Word embedding models require large data sets of &lt;a href="https://www.techtarget.com/whatis/definition/structured-data"&gt;structured data&lt;/a&gt;. While they are adept at many general NLP tasks, they fail at the context-heavy, predictive nature of question answering because all words are in some sense fixed to a vector or meaning.&lt;/p&gt;
 &lt;p&gt;BERT uses an MLM method to keep the word in focus from seeing itself, or having a fixed meaning independent of its context. BERT is forced to identify the masked word based on context alone. In BERT, words are defined by their surroundings, not by a prefixed identity.&lt;/p&gt;
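The masked-prediction idea can be sketched with a much simpler stand-in for BERT's transformer: score each candidate word by how often it co-occurs with the visible context words. The corpus and scoring here are invented purely for illustration.

```python
from collections import Counter

# Sketch of the masked-language-modeling objective: hide one word and
# predict it from its surroundings. Simple co-occurrence counts stand
# in for BERT's deep bidirectional transformer.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Count how often each word appears alongside each context word.
cooccur = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, ctx in enumerate(words):
            if i != j:
                cooccur.setdefault(w, Counter())[ctx] += 1

def predict_masked(sentence_with_mask):
    """Score every candidate by its co-occurrence with the visible context."""
    context = [w for w in sentence_with_mask.split() if w != "[MASK]"]
    scores = {
        w: sum(counts[c] for c in context)
        for w, counts in cooccur.items()
        if w not in context
    }
    return max(scores, key=scores.get)

print(predict_masked("the cat [MASK] on the mat"))  # -> "sat"
```

Like BERT during MLM pretraining, the predictor never sees the hidden word itself; everything it knows comes from the words around the blank.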
 &lt;h3&gt;Self-attention mechanisms&lt;/h3&gt;
 &lt;p&gt;BERT also relies on a self-attention mechanism that captures and understands relationships among words in a sentence. The bidirectional transformers at the center of BERT's design make this possible. This is significant because often, a word may change meaning as a sentence develops. Each word added augments the overall meaning of the word the NLP algorithm is focusing on. The more words that are present in each sentence or phrase, the more ambiguous the word in focus becomes. BERT accounts for the augmented meaning by reading bidirectionally, accounting for the effect of all other words in a sentence on the focus word and eliminating the left-to-right momentum that biases words towards a certain meaning as a sentence progresses.&lt;/p&gt;
 &lt;figure class="main-article-image half-col" data-img-fullsize="https://www.techtarget.com/rms/onlineImages/whatis-bert-h.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineImages/whatis-bert-h_half_column_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineImages/whatis-bert-h_half_column_mobile.png 960w,https://www.techtarget.com/rms/onlineImages/whatis-bert-h.png 1280w" alt="Diagram showing how BERT identifies meanings of words" height="387" width="279"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;BERT examines individual words in context to determine the meaning of ambiguous language.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
 &lt;p&gt;For example, in the image above, BERT is determining which prior word in the sentence the word "it" refers to, and then using the self-attention mechanism to weigh the options. The word with the highest calculated score is deemed the correct association. In this example, "it" refers to "animal," not "street." If this phrase were a search &lt;a href="https://www.techtarget.com/searchdatamanagement/definition/query"&gt;query&lt;/a&gt;, the results would reflect the subtler, more precise understanding BERT reached.&lt;/p&gt;
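The weighing step can be made concrete with a minimal scaled dot-product self-attention sketch. The four-dimensional word vectors are invented, and BERT's learned query/key/value projections are omitted for brevity; the point is that each word's output mixes in every other word, weighted by similarity.

```python
import numpy as np

def self_attention(X):
    """X: (n_words, d) matrix of word vectors. Returns the attention
    weights and the context-mixed output vectors. Queries, keys and
    values are the inputs themselves here (BERT learns projections)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # similarity of every word pair
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row
    return weights, weights @ X                    # each output blends all words

# Three "words": the third most closely resembles the first.
X = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0],
              [1.0, 0.0, 0.9, 0.1]])
weights, out = self_attention(X)
# Row 2's largest off-diagonal weight points at word 0, its closest match,
# just as "it" attends most strongly to "animal" in the figure.
print(np.round(weights, 2))
```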
 &lt;h3&gt;Next sentence prediction&lt;/h3&gt;
 &lt;p&gt;NSP is a training technique that teaches BERT to predict whether a certain sentence follows a previous sentence to test its knowledge of relationships between sentences. Specifically, BERT is given both sentence pairs that are correctly paired and pairs that are wrongly paired so it gets better at understanding the difference. Over time, BERT gets better at predicting next sentences accurately. Typically, both NSP and MLM techniques are used simultaneously.&lt;/p&gt;
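Building that training data can be sketched as follows: roughly half the pairs are genuine consecutive sentences (label 1), and half pair a sentence with a random one from elsewhere (label 0). The sentences below are invented for illustration.

```python
import random

# Sketch of how next-sentence-prediction training pairs are assembled.
def make_nsp_pairs(sentences, seed=0):
    """For each sentence, emit (sentence, candidate, label): label 1 if
    the candidate truly follows it, 0 if the candidate was drawn at random."""
    rng = random.Random(seed)
    pairs = []
    for i in range(len(sentences) - 1):
        if rng.random() < 0.5:
            pairs.append((sentences[i], sentences[i + 1], 1))       # true next
        else:
            pairs.append((sentences[i], rng.choice(sentences), 0))  # random
    return pairs

doc = ["He went to the store.", "He bought milk.", "Then he walked home.",
       "Penguins are flightless.", "It rained all day."]
for a, b, label in make_nsp_pairs(doc):
    print(label, "|", a, "->", b)
```

The model's job is then binary classification over these pairs, which is how it learns whether two sentences are logically connected or merely adjacent by chance.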
&lt;/section&gt;                 
&lt;section class="section main-article-chapter" data-menu-title="What is BERT used for?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is BERT used for?&lt;/h2&gt;
 &lt;p&gt;Google uses BERT to optimize the interpretation of user search queries. BERT excels at functions that make this possible, including the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;Sequence-to-sequence language generation tasks such as:&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li style="list-style-type: none;"&gt; 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
    &lt;li&gt;Question answering.&lt;/li&gt; 
    &lt;li&gt;Abstract summarization.&lt;/li&gt; 
    &lt;li&gt;Sentence prediction.&lt;/li&gt; 
    &lt;li&gt;Conversational response generation.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
 &lt;/ul&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;NLU tasks such as:&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li style="list-style-type: none;"&gt; 
   &lt;ul style="list-style-type: circle;" class="default-list"&gt; 
     &lt;li&gt;Polysemy and coreference resolution. Coreference resolution determines which words in a text refer to the same entity, such as linking a pronoun to its antecedent; polysemy refers to words with multiple meanings.&lt;/li&gt; 
    &lt;li&gt;Word sense disambiguation.&lt;/li&gt; 
    &lt;li&gt;Natural language inference.&lt;/li&gt; 
    &lt;li&gt;Sentiment classification.&lt;/li&gt; 
   &lt;/ul&gt; &lt;/li&gt; 
 &lt;/ul&gt;
  &lt;p&gt;BERT is open source, meaning anyone can use it. Google claims that users can train a state-of-the-art question-and-answer system in just 30 minutes on a cloud tensor processing unit, and in a few hours using a &lt;a href="https://www.techtarget.com/searchvirtualdesktop/definition/GPU-graphics-processing-unit"&gt;graphics processing unit&lt;/a&gt;. Many other organizations, research groups and separate teams within Google are fine-tuning the model's architecture with supervised training to either optimize it for efficiency or specialize it for specific tasks by pretraining BERT with certain contextual representations. Examples include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;b&gt;PatentBERT.&lt;/b&gt; This BERT model is fine-tuned to perform patent classification tasks.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;DocBERT.&lt;/b&gt; This model is fine-tuned for document classification tasks.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;BioBERT.&lt;/b&gt; This biomedical language representation model is for biomedical &lt;a href="https://www.techtarget.com/searchbusinessanalytics/definition/text-mining"&gt;text mining&lt;/a&gt;.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;VideoBERT.&lt;/b&gt; This joint visual-linguistic model is used in unsupervised learning of unlabeled data on YouTube.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;SciBERT.&lt;/b&gt; This model is for scientific text.&lt;/li&gt; 
   &lt;li&gt;&lt;b&gt;G-BERT.&lt;/b&gt; This pretrained BERT model uses medical codes with hierarchical representations through &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/graph-neural-networks-GNNs"&gt;graph neural networks&lt;/a&gt; and is then fine-tuned to make medical recommendations.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;TinyBERT by Huawei.&lt;/b&gt; This smaller, "student" BERT learns from the original "teacher" BERT, performing transformer distillation to improve efficiency. TinyBERT produced promising results in comparison to BERT-base while being 7.5 times smaller and 9.4 times faster at inference.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;DistilBERT by Hugging Face.&lt;/b&gt; This smaller, faster and cheaper version of BERT is trained from BERT, then certain architectural aspects are removed to improve efficiency.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;ALBERT.&lt;/b&gt; This lighter version of BERT lowers memory consumption and increases the speed with which the model is trained.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;SpanBERT.&lt;/b&gt; This model improved BERT's ability to predict spans of text.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;RoBERTa. &lt;/b&gt;Through more advanced training methods, this model was trained on a bigger data set for a longer time to improve performance.&lt;/li&gt; 
  &lt;li&gt;&lt;b&gt;ELECTRA. &lt;/b&gt;This version has been tailored to generate high-quality representations of text.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;figure class="main-article-image full-col" data-img-fullsize="https://www.techtarget.com/rms/onlineimages/the_trend_toward_smaller_language_models-f.png"&gt;
  &lt;img data-src="https://www.techtarget.com/rms/onlineimages/the_trend_toward_smaller_language_models-f_mobile.png" class="lazy" data-srcset="https://www.techtarget.com/rms/onlineimages/the_trend_toward_smaller_language_models-f_mobile.png 960w,https://www.techtarget.com/rms/onlineimages/the_trend_toward_smaller_language_models-f.png 1280w" alt="Quotes from AI experts on the rise of smaller language models." height="314" width="560"&gt;
  &lt;figcaption&gt;
   &lt;i class="icon pictures" data-icon="z"&gt;&lt;/i&gt;Smaller language models, like the more optimized versions of BERT, are becoming more commonplace.
  &lt;/figcaption&gt;
  &lt;div class="main-article-image-enlarge"&gt;
   &lt;i class="icon" data-icon="w"&gt;&lt;/i&gt;
  &lt;/div&gt;
 &lt;/figure&gt;
&lt;/section&gt;         
&lt;section class="section main-article-chapter" data-menu-title="BERT vs. generative pre-trained transformers (GPT)"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;BERT vs. generative pre-trained transformers (GPT)&lt;/h2&gt;
  &lt;p&gt;While BERT and GPT models are among the &lt;a href="https://www.techtarget.com/whatis/feature/12-of-the-best-large-language-models"&gt;best language models&lt;/a&gt;, they exist for different reasons. The initial &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/GPT-3"&gt;GPT-3 model&lt;/a&gt;, along with OpenAI's subsequent, more advanced GPT models, is also a language model trained on massive data sets. While GPT models share this approach with BERT, BERT differs in multiple ways.&lt;/p&gt;
 &lt;h3&gt;BERT&lt;/h3&gt;
  &lt;p&gt;Google developed BERT to serve as a bidirectional transformer model that examines words within text by considering both left-to-right and right-to-left contexts. It helps computer systems understand text, as opposed to creating text, which GPT models are made to do. BERT excels at NLU tasks as well as sentiment analysis, making it well suited to applications such as Google search and analyzing customer feedback.&lt;/p&gt;
 &lt;h3&gt;GPT&lt;/h3&gt;
 &lt;p&gt;GPT models differ from BERT in both their objectives and their use cases. GPT models are forms of generative AI that generate original text and other forms of content. They're also well-suited for summarizing long pieces of text and text that's hard to interpret.&lt;/p&gt;
 &lt;p&gt;&lt;i&gt;BERT and other language models differ not only in scope and applications but also in architecture. Learn more about &lt;/i&gt;&lt;a href="https://www.techtarget.com/searchenterpriseai/feature/Exploring-GPT-3-architecture"&gt;&lt;i&gt;GPT-3's architecture&lt;/i&gt;&lt;/a&gt;&lt;i&gt; and how it's different from BERT.&lt;/i&gt;&lt;/p&gt;
&lt;/section&gt;</body>
            <description>BERT language model is an open source machine learning framework for natural language processing (NLP).</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/3.jpg</image>
            <link>https://www.techtarget.com/searchenterpriseai/definition/BERT-language-model</link>
            <pubDate>Thu, 15 Feb 2024 18:14:00 GMT</pubDate>
            <title>BERT language model</title>
        </item>
        <item>
            <body>&lt;section class="section main-article-chapter" data-menu-title="What is cognitive search?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;What is cognitive search?&lt;/h2&gt;
 &lt;p&gt;Cognitive search represents a new generation of enterprise search that uses artificial intelligence (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence"&gt;AI&lt;/a&gt;) technologies to improve users' search queries and extract relevant information from multiple diverse data sets. Cognitive search capabilities extend beyond those of a classic search engine to bring numerous data sources together while also providing &lt;a href="https://www.techtarget.com/searchcontentmanagement/tip/AI-in-content-management-supports-tagging-search"&gt;automated tagging&lt;/a&gt; and personalization. It has the potential to greatly improve how an organization's employees discover and access information that's relevant and necessary to their work context.&lt;/p&gt;
 &lt;p&gt;Cognitive search differs from previously available search products because it combines indexing technology with powerful AI technologies -- such as natural language processing (&lt;a href="https://www.techtarget.com/searchbusinessanalytics/definition/natural-language-processing-NLP"&gt;NLP&lt;/a&gt;) capabilities and &lt;a href="https://www.techtarget.com/whatis/definition/algorithm"&gt;algorithms&lt;/a&gt; -- to scale a variety of data sources and types. Additionally, developers can build search applications that can be embedded into business process applications, such as pharmaceutical research tools and customer portals.&lt;/p&gt;
 &lt;p&gt;The primary benefits that organizations can reap from cognitive search include its impact on &lt;a href="https://www.techtarget.com/searchenterpriseai/feature/KDD-in-data-mining-assists-data-prep-for-machine-learning"&gt;knowledge discovery&lt;/a&gt; -- a user's ability to extract useful information from data. For example, cognitive search improves the relevance of extracted information and increases the efficiency of query responses, allowing employees to boost their productivity and provide better service.&lt;/p&gt;
&lt;/section&gt;    
&lt;section class="section main-article-chapter" data-menu-title="Importance and benefits of cognitive search"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Importance and benefits of cognitive search&lt;/h2&gt;
 &lt;p&gt;Keyword-based search and traditional enterprise search have become inadequate due to the increasing variety and amount of data used within organizations. The two methodologies impair search processes and employee productivity by returning irrelevant or incomplete results that users must sort through to find the information they need.&lt;/p&gt;
 &lt;p&gt;With cognitive search, the AI technologies that are introduced enable enterprise search to extract advanced meaning from content as well as learn from users' searches to provide increasingly relevant and complete results. Some overall benefits of cognitive search include:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;strong&gt;Maximized productivity.&lt;/strong&gt; A single search functionality removes the necessity of switching between apps and eliminates time wasted on tasks like re-entering credentials multiple times. Furthermore, the unification of data tools allows organizations to streamline their business processes.&lt;/li&gt; 
   &lt;li&gt;&lt;strong&gt;Improved employee experience and engagement. &lt;/strong&gt;Eliminating wasted time and increasing productivity promotes employee loyalty. Machine learning (&lt;a href="https://www.techtarget.com/searchenterpriseai/definition/machine-learning-ML"&gt;ML&lt;/a&gt;) algorithms that provide personalized suggestions help users find relevant data more quickly, and the flexibility of cognitive search creates an improved user experience through personalization. Because an employee's search experience is improved, they're more likely to use the tools consistently.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Lower operational costs. &lt;/strong&gt;Maximized productivity decreases an organization's operational costs since less time and resources are needed for gathering information and knowledge discovery. This is especially beneficial to industries such as healthcare and legal services that work with massive amounts of data.&lt;/li&gt; 
 &lt;/ul&gt;
  &lt;p&gt;As companies grow and acquire new customers, the need to run and analyze large amounts of data increases as well. If a company is bringing in thousands of new customers every day, its data growth is exponential, making it almost impossible to keep up with the new information. Cognitive search makes it feasible to decipher a consistently growing collection of data for use within different departments of the company.&lt;/p&gt;
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="How does cognitive search work?"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;How does cognitive search work?&lt;/h2&gt;
  &lt;p&gt;The design elements used in enterprise search form the foundation of cognitive search. This means that organizations do not need to entirely rebuild their information technology (IT) infrastructure when implementing cognitive search. AI technologies then build on top of this foundation to find relevant information across all available enterprise data sources.&lt;/p&gt;
  &lt;p&gt;NLP is used to understand what unstructured data from emails, documents, market research, videos and recordings means. ML algorithms continuously improve the relevancy of results. Some of the most common ML algorithms found in cognitive search include:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;&lt;strong&gt;Clustering.&lt;/strong&gt; This is an &lt;a href="https://www.techtarget.com/whatis/definition/unsupervised-learning"&gt;unsupervised learning&lt;/a&gt; algorithm that groups subsets of data based on similarities. Clustering is employed when users do not want to run a search through the entire search index. Its goal is to limit searches to specific groups of documents in each cluster.&lt;/li&gt; 
   &lt;li&gt;&lt;strong&gt;Classification. &lt;/strong&gt;This is a &lt;a href="https://www.techtarget.com/searchenterpriseai/definition/supervised-learning"&gt;supervised learning&lt;/a&gt; algorithm that creates a model to predict labels for new data using a training set of prelabeled data.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Regression. &lt;/strong&gt;This is another supervised learning algorithm that uses the relationship between input and output variables to predict continuous numeric values from data.&lt;/li&gt; 
  &lt;li&gt;&lt;strong&gt;Recommendation.&lt;/strong&gt; This often combines various basic algorithms to generate a recommendation engine that offers potentially helpful content to users. Also called content-based recommendation, it offers personalized recommendations based on the relationship between a user's interest and the description and attributes of a document.&lt;/li&gt; 
 &lt;/ul&gt;
 &lt;p&gt;In addition to these ML algorithms, a heavy computing process, referred to as &lt;em&gt;similarity&lt;/em&gt;, builds a matrix that synthesizes the interactions between data samples.&lt;/p&gt;
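A minimal sketch of such a similarity matrix, using cosine similarity over invented term-count vectors (real systems would use far larger, learned document representations):

```python
import numpy as np

def similarity_matrix(V):
    """V: (n_docs, n_terms) matrix of document vectors. Returns the
    n_docs x n_docs matrix whose (i, j) entry is the cosine similarity
    between documents i and j."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    unit = V / norms          # normalize each document vector to length 1
    return unit @ unit.T      # dot products of unit vectors = cosines

docs = np.array([[2.0, 1.0, 0.0],    # doc 0: mostly term A
                 [2.0, 0.9, 0.1],    # doc 1: near-duplicate of doc 0
                 [0.0, 0.2, 3.0]])   # doc 2: mostly term C
S = similarity_matrix(docs)
# Docs 0 and 1 score close to 1.0 with each other; doc 2 stands apart.
print(np.round(S, 2))
```

Downstream components such as clustering and recommendation can then read pairwise relationships directly off this matrix instead of recomputing them per query.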
&lt;/section&gt;     
&lt;section class="section main-article-chapter" data-menu-title="Cognitive search tools"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Cognitive search tools&lt;/h2&gt;
 &lt;p&gt;Cognitive search is still in its infancy, but multiple companies have jumped on the opportunity to create and market cognitive search tools. They include the following:&lt;/p&gt;
 &lt;ul class="default-list"&gt; 
  &lt;li&gt;IBM with Watson Explorer.&lt;/li&gt; 
  &lt;li&gt;Coveo.&lt;/li&gt; 
   &lt;li&gt;Attivio with its Cognitive Search and Insight Platform.&lt;/li&gt; 
  &lt;li&gt;Lucidworks.&lt;/li&gt; 
  &lt;li&gt;Mindbreeze.&lt;/li&gt; 
   &lt;li&gt;Sinequa with its Insight Platform.&lt;/li&gt; 
  &lt;li&gt;Microsoft Azure Cognitive Search.&lt;/li&gt; 
  &lt;li&gt;Algolia.&lt;/li&gt; 
 &lt;/ul&gt;
&lt;/section&gt;   
&lt;section class="section main-article-chapter" data-menu-title="Examples of cognitive search"&gt;
 &lt;h2 class="section-title"&gt;&lt;i class="icon" data-icon="1"&gt;&lt;/i&gt;Examples of cognitive search&lt;/h2&gt;
  &lt;p&gt;Legal practices with international exposure are finding cognitive search useful by implementing legal industry-specific add-ons that help find experts in specific areas of law. These experts can then be organized into specialized teams across a firm's international offices.&lt;/p&gt;
 &lt;p&gt;Cognitive search is also finding beneficial applications within customer service. Representatives can access multiple applications and widespread digital content simultaneously, including everything from shipping information to product details. This allows them to respond to customer requests and resolve problems more efficiently.&lt;/p&gt;
 &lt;p&gt;Within IT operations, cognitive systems can consistently monitor &lt;a href="https://www.techtarget.com/whatis/definition/log-log-file"&gt;log files&lt;/a&gt; that indicate faulty builds or misuse of the network. &lt;a href="https://www.techtarget.com/whatis/definition/telemetry"&gt;Telemetry&lt;/a&gt; data can also be scanned to find irregular activity that could warn of a potential outage.&lt;/p&gt;
&lt;/section&gt;</body>
            <description>Cognitive search represents a new generation of enterprise search that uses artificial intelligence (AI) technologies to improve users' search queries and extract relevant information from multiple diverse data sets.</description>
            <image>https://cdn.ttgtmedia.com/visuals/digdeeper/5.jpg</image>
            <link>https://www.techtarget.com/searchenterpriseai/definition/cognitive-search</link>
            <pubDate>Thu, 11 May 2023 13:28:00 GMT</pubDate>
            <title>cognitive search</title>
        </item>
        <title>Search Enterprise AI Resources and Information from TechTarget</title>
        <ttl>60</ttl>
        <webMaster>webmaster@techtarget.com</webMaster>
    </channel>
</rss>
