
Azure Cognitive Search

Azure Cognitive Search is the former name of a Microsoft cloud-based search service, now known as Azure AI Search, that enables developers to index, enrich, and query large volumes of heterogeneous content—including text, images, and vectors—for building scalable search experiences in applications, websites, and AI agents. Launched in 2015 as Azure Search, it evolved to incorporate AI capabilities, reflecting its growing role in information retrieval for both traditional full-text search and modern retrieval-augmented generation (RAG) scenarios.

The service underwent significant rebranding to align with advancements in artificial intelligence. In October 2019, it was renamed Azure Cognitive Search to emphasize the integration of optional cognitive skills for AI-driven data enrichment during indexing, such as text analysis, image processing, and entity recognition. In November 2023, it was renamed again, to Azure AI Search, to better position it within Microsoft's broader Azure AI ecosystem, adding support for vector and hybrid search while maintaining backward compatibility with existing implementations. This evolution underscores its transition from a basic full-text search engine to a foundational component for enterprise AI applications, powering agentic retrieval in conversational AI and knowledge mining workflows.

Key features of Azure AI Search (formerly Azure Cognitive Search) include scalable indexing pipelines with built-in AI enrichment via customizable skillsets, along with advanced query options such as semantic ranking, vector similarity search, and hybrid combinations of keyword and AI-based retrieval. It integrates natively with other Azure services, such as Azure AI Foundry for model deployment, Azure OpenAI for generative capabilities, and data sources like Azure SQL Database, Blob Storage, and Azure Cosmos DB, ensuring secure, enterprise-grade access control through Microsoft Entra ID and private endpoints.
As of 2025, recent updates have expanded multimodal support for processing images and text together, along with improved performance for high-scale vector operations, making the service suitable for use cases ranging from customer-facing site search to internal knowledge bases.

History

Azure Search was introduced in public preview on August 21, 2014, as a fully managed cloud-based search service designed to provide full-text search capabilities for applications hosted on the Azure platform. The launch addressed developers' need to integrate scalable search functionality without the overhead of managing search infrastructure, offering a free tier supporting up to 10,000 documents and three indexes, alongside a standard tier for larger-scale deployments with tens of millions of documents. Initial indexing was supported through a push model that allowed data ingestion from sources such as Azure SQL Database, DocumentDB, Blob Storage, Table Storage, on-premises systems, or even non-Azure cloud environments, with batch uploads limited to 1,000 documents per request.

The service achieved general availability on March 23, 2015, marking its readiness for production use with an enterprise-grade service-level agreement (SLA). At this milestone, Azure Search introduced dedicated indexers to automate data loading from SQL Database, DocumentDB, and SQL Server running on Azure Virtual Machines, streamlining the process of populating search indexes without manual API calls. A .NET SDK was made available via NuGet, facilitating integration into .NET applications.

From its inception, querying in Azure Search emphasized OData-based expressions for filtering, ordering, selecting, and paginating results via simple HTTP GET requests, enabling structured data manipulation in search responses. Basic integration with Lucene query syntax provided support for advanced features, including keyword matching, phrase searches, and relevance ranking based on term frequency and document statistics. These capabilities catered primarily to application-embedded search scenarios, such as powering search experiences in web and mobile applications for e-commerce platforms like Autotrader.ca, which served over 5 million monthly users, or omnichannel retail solutions like XOMNI, all while abstracting away infrastructure management.

Evolution to Cognitive Capabilities

In May 2018, Microsoft announced the public preview of Cognitive Search, an enhancement to Azure Search that integrated Azure Cognitive Services to enable automatic content extraction and enrichment during indexing. This feature allowed developers to apply AI models directly in the search pipeline, extracting insights from unstructured data via optical character recognition (OCR) for text in images, named entity recognition for identifying people, organizations, and locations, key phrase extraction, language detection, and image analysis for tagging and describing visual content. By combining these capabilities, Cognitive Search transformed raw documents—like PDFs, scanned images, and multimedia files—into searchable knowledge graphs, supporting use cases such as intelligent document processing and enhanced discovery in enterprise applications.

A key innovation introduced with this preview was the concept of skillsets, which provided a framework for chaining multiple AI models into a modular pipeline executed during indexing. Skillsets enabled sequential enrichment, where outputs from one skill—such as text extracted via OCR—could feed into subsequent skills like text analytics for entity recognition or key phrase extraction, or image analysis for tagging and captioning. This composable approach allowed customization through built-in skills from Azure Cognitive Services or custom extensions via webhooks, such as Azure Functions, facilitating scalable AI enrichment without separate preprocessing steps.

To underscore its growing emphasis on AI-driven search, Microsoft officially renamed the service from Azure Search to Azure Cognitive Search in October 2019. The rebranding highlighted the optional yet integral role of AI and natural language processing in core operations, including indexing and querying, positioning the service as a leader in intelligent cloud search. The change aligned with broader advancements in Azure's AI portfolio, making it easier for developers to leverage unified tools for more insightful search experiences.
A significant milestone followed in 2020 with the preview of semantic search capabilities, introduced via API version 2020-06-30-Preview, which extended relevance ranking beyond traditional keyword matching. Semantic search used deep learning language models to understand query intent, enabling concept-based matching and reranking of results for greater accuracy in scenarios such as enterprise knowledge bases. This feature marked a shift toward more human-like search, improving relevance without manual query tuning.

Rebranding and Recent Developments

In November 2023, Azure Cognitive Search was rebranded to Azure AI Search to better align with Microsoft's broader Azure services portfolio and customer expectations for AI-centric capabilities, while ensuring no breaking changes to existing deployments, configurations, or integrations. A key enhancement in 2023 was the introduction of vector search, which enables indexing and querying based on embedding similarity matching to support advanced retrieval-augmented generation (RAG) patterns in generative AI applications. Between 2024 and 2025, Azure AI Search saw significant updates, including the public preview of query rewrite in November 2024—initially available in select regions like North Europe and Southeast Asia, with broader rollout expected by 2025—to improve search relevance by automatically reformulating user queries. Multimodal search support was also added, allowing ingestion and retrieval of content combining text and images through built-in extraction, normalization, and embedding processes. Deeper integration with Azure OpenAI further advanced RAG workflows, enabling seamless use of enterprise data in models like GPT-4 without custom training. At Ignite 2024, Microsoft emphasized agentic search capabilities for AI agents, including a generative query engine optimized for performance and enhanced scalability for AI workloads through increased vector and storage capacities. These developments position Azure AI Search as a foundational service for building autonomous, scalable AI applications.

Overview

Core Functionality

Azure Cognitive Search, now known as Azure AI Search, is a fully managed cloud search service from Microsoft that enables developers to build rich search experiences over private and public data sources. It serves as a scalable search infrastructure, handling the core tasks of indexing diverse content types and facilitating retrieval through APIs, applications, and AI agents. The service is designed to support enterprise-scale search scenarios, including AI enrichment and relevance tuning, without requiring users to manage the underlying infrastructure.

The operational workflow begins with data ingestion, where content is loaded into a searchable index. This involves pushing documents directly via REST APIs or SDKs, or using automated indexers to pull data from supported Azure sources such as Blob Storage or Azure SQL Database, transforming it into inverted indexes for text or vector indexes for embeddings. Once indexed, queries are executed through REST APIs or SDKs: client applications submit search requests and receive ranked results based on relevance scoring, with options for semantic reranking to prioritize contextually similar documents. This API-driven querying ensures low-latency retrieval, with results formatted as JSON for easy integration into web or mobile applications.

Azure Cognitive Search supports heterogeneous data sources, primarily in the form of documents that can include structured, semi-structured, or unstructured content. During ingestion, particularly through tools like the Import data wizard in the Azure portal, the service performs automatic schema inference by sampling a subset of documents to detect field names, data types (such as Edm.String or Collection(Edm.ComplexType)), and relationships, generating an initial index schema that users can refine. This allows flexible handling of varied data formats without defining a schema manually from scratch, though complex nested structures may require field mappings for optimal indexing.
Unlike general-purpose databases such as Azure SQL Database, which are optimized for transactional storage, compliance, and relational queries, Azure Cognitive Search is specialized for fast full-text retrieval and relevance-based ranking. It offloads search workloads from primary data stores by maintaining dedicated indexes that prioritize query performance over data modification, making it unsuitable for high-concurrency updates but ideal for read-heavy search applications where sub-second response times are critical. This positions it as a complement to, rather than a replacement for, databases, focusing on search-specific optimizations like tokenization and scoring profiles.
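To make the API-driven workflow above concrete, the following Python sketch builds the URL and JSON body for a query against the Search Documents REST endpoint. The service name `my-search`, index name `hotels`, and field names are hypothetical placeholders, and the API version shown is one stable release; production code would typically use the official `azure-search-documents` SDK rather than hand-built requests.

```python
import json

API_VERSION = "2023-11-01"  # one stable REST API version; newer previews exist


def build_query_request(service, index, search_text, top=10, select=None):
    """Construct the URL and JSON body for a POST to the Search Documents API."""
    url = (f"https://{service}.search.windows.net/indexes/{index}"
           f"/docs/search?api-version={API_VERSION}")
    body = {"search": search_text, "top": top}
    if select:
        body["select"] = select  # project only the listed fields in results
    return url, json.dumps(body)


# Hypothetical query: full-text search over a hotels index, top 5 results.
url, body = build_query_request("my-search", "hotels", "beachfront wifi",
                                top=5, select="HotelId,HotelName")
```

Sending this body with an `api-key` or Entra ID bearer token header returns a JSON payload of ranked documents, which the client renders directly.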

Role in Azure Ecosystem

Azure AI Search, formerly known as Azure Cognitive Search, serves as a core component within the Azure AI services portfolio, delivering scalable search and retrieval capabilities that enhance the intelligence of cloud-based applications. It acts as a bridge between raw data storage and advanced analytics, complementing services like Azure Blob Storage for unstructured content ingestion (with built-in pipelines for data chunking and vectorization), and integrating with compute platforms such as Azure Functions for event-driven indexing pipelines and Azure App Service for hosting search-enabled web applications. This interoperability enables developers to build end-to-end solutions in which search acts as the foundational layer for data discovery and AI-driven insights.

The service depends on Microsoft Entra ID—previously Azure Active Directory—for authentication and authorization, supporting keyless connections via role-based access control (RBAC) to enforce granular permissions on indexes, documents, and administrative operations. For operational diagnostics, it leverages Azure Monitor to collect platform metrics on query performance, indexing throughput, and resource utilization, as well as diagnostic logs for auditing and alerting on service health. These dependencies ensure secure, observable deployments within the Azure environment, aligned with broader platform governance standards.

In enterprise contexts, Azure AI Search extends to search experiences across Microsoft ecosystems, such as indexing document libraries for full-text retrieval, and to Dynamics 365, where it underpins cloud-powered search in commerce modules for product catalogs and knowledge mining. For custom AI applications, it exposes REST APIs for core operations like indexing and querying, alongside SDKs in .NET for managed integrations, Python for data science workflows, and JavaScript for client-side implementations, allowing seamless incorporation into hybrid or generative AI scenarios.
The evolution of the service reflects a shift from traditional IaaS- and PaaS-style support for basic search workloads to an AI-first paradigm, incorporating vector search and retrieval-augmented generation (RAG) patterns to support modern AI agents and large language models. Scaling is facilitated through Azure Resource Manager (ARM), which provides templates for provisioning services, managing API keys, and adjusting replicas or partitions across pricing tiers to handle varying loads without downtime.

Service Model

Platform as a Service Characteristics

Azure AI Search, formerly known as Azure Cognitive Search, operates as a fully managed platform as a service (PaaS) offering, where Microsoft assumes responsibility for the underlying infrastructure, software updates, and maintenance. This model allows users to focus exclusively on defining search indexes, ingesting data, and crafting queries, without managing servers, operating systems, or hardware provisioning. The service ensures high availability through built-in redundancy and delivers a 99.9% service-level agreement (SLA) for qualifying configurations, providing enterprise-grade reliability for search workloads.

In contrast to infrastructure as a service (IaaS) deployments, Azure AI Search eliminates the need for provisioning, patching, or scaling hardware, as required by self-hosted search engines such as Elasticsearch running on IaaS platforms. For instance, users deploying Elasticsearch on Azure Virtual Machines must handle cluster management, backups, and upgrades manually, whereas Azure AI Search automates these aspects to streamline development and reduce operational overhead. Compared to other PaaS alternatives, such as Amazon OpenSearch Service, Azure AI Search offers similar managed search infrastructure but with tighter coupling to Azure AI services for enhanced cognitive features.

Provisioning an Azure AI Search service is straightforward and can be accomplished through the Azure portal for a graphical experience, the Azure CLI for scripted deployments, or Azure Resource Manager (ARM) templates for infrastructure-as-code automation. Upon creation, the service automatically provisions resources with partitioning to distribute data across storage units, enabling seamless horizontal scaling as query volumes grow. The resource model is based on search units, calculated as the product of partitions (for storage and indexing capacity) and replicas (for query throughput and availability), which collectively determine the service's performance limits and are billed hourly.

Deployment Tiers and Scalability

Azure AI Search offers several deployment tiers, each designed to accommodate different workload sizes and performance needs, ranging from development and testing to large-scale production environments. The Free tier provides 50 MB of storage and supports up to 3 indexes, making it suitable for exploratory work but with shared resources and no scalability options. The Basic tier, intended for small production applications, offers 15 GB of storage per partition, up to 3 replicas and 3 partitions, and supports up to 15 indexes, with dedicated resources but limited throughput compared to higher tiers. Standard tiers (S1, S2, S3) provide dedicated infrastructure for enterprise workloads, with storage capacities of 160 GB, 512 GB, and 1 TB per partition respectively, supporting up to 12 replicas and 12 partitions, and up to 200 indexes in S2 and S3. The S3 High Density (HD) variant optimizes for multi-tenant scenarios with up to 3,000 indexes across 3 partitions but without indexer support. Storage Optimized tiers (L1 and L2) focus on high-volume, static data storage, offering 2 TB and 4 TB per partition respectively, with up to 12 partitions and 12 replicas, though at the cost of higher query latency. Scalability in Azure AI Search is achieved horizontally by adjusting replicas and partitions, which form search units (SU) calculated as replicas multiplied by partitions, with a maximum of 36 SUs in Standard and Storage Optimized tiers. Replicas enhance query throughput and availability by distributing read workloads across multiple instances, ideal for high queries-per-second (QPS) scenarios, while partitions increase storage capacity and support larger indexes by sharding data. For instance, adding replicas improves response times for concurrent searches, but scaling operations can take 15 minutes to over an hour depending on data volume. 
Service limits cap replicas and partitions at 3 for Basic and 12 for Standard and above, ensuring predictable performance; there is no automatic scaling feature. Advanced features like vector search require a billable tier (Basic or higher) due to increased storage demands, with Standard tiers recommended for production to handle the associated vector indexing overhead, which can consume up to 300 GB in S3 configurations. Best practices for deployment include starting with the Free tier for development and testing to validate index designs and query patterns, then scaling to Standard tiers for production based on QPS and latency metrics monitored via the Azure portal. Deployments should prioritize query-heavy workloads by allocating more replicas early, while indexing-intensive tasks benefit from additional partitions to avoid bottlenecks.
| Tier | Storage per Partition | Max Replicas | Max Partitions | Max Indexes | Primary Use Case |
| --- | --- | --- | --- | --- | --- |
| Free | 50 MB (shared) | N/A | N/A | 3 | Development/testing |
| Basic | 15 GB | 3 | 3 | 15 | Small production |
| Standard S1 | 160 GB | 12 | 12 | 50 | General enterprise |
| Standard S2 | 512 GB | 12 | 12 | 200 | High-volume queries |
| Standard S3 | 1 TB | 12 | 12 | 200 | Large-scale |
| Standard S3 HD | 1 TB | 12 | 3 | 3,000 | Multi-tenant |
| Storage Optimized L1 | 2 TB | 12 | 12 | 10 | High storage needs |
| Storage Optimized L2 | 4 TB | 12 | 12 | 10 | Massive static data |
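The search-unit arithmetic described above can be sketched as a small helper. The 12-per-axis and 36-SU caps reflect the Standard-tier limits stated in this section; this is an illustrative calculation, not an official pricing tool.

```python
def search_units(replicas, partitions):
    """Billable search units for a Standard-tier service: SU = replicas x partitions."""
    if not (1 <= replicas <= 12) or not (1 <= partitions <= 12):
        raise ValueError("Standard tiers allow at most 12 replicas and 12 partitions")
    su = replicas * partitions
    if su > 36:
        raise ValueError("total search units are capped at 36 per service")
    return su


# A query-heavy configuration: favor replicas for throughput over partitions.
su = search_units(replicas=6, partitions=2)  # 12 SUs, billed hourly
```

Because replicas drive query throughput and partitions drive storage, the same 12-SU budget could instead be spent as 2 replicas and 6 partitions for an indexing-heavy workload.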

Indexing and Data Management

Data Ingestion Methods

Azure AI Search supports two primary data ingestion models: the push model, which involves direct programmatic uploads, and the pull model, which uses automated indexers to fetch data from supported sources. These methods enable efficient population of search indexes with structured and unstructured data, accommodating both real-time and scheduled updates.

In the push model, data is ingested by uploading JSON documents directly to the index using REST APIs or Azure SDKs in languages such as .NET, Python, or Java. This approach is ideal for real-time scenarios, as it allows immediate indexing without dependencies on external data sources, supporting operations like upsert (merge or upload), merge, and delete. Batches are limited to 1,000 documents or 16 MB total size, making this model suitable for applications requiring fine-grained control over connectivity and update frequency.

The pull model employs indexers that periodically connect to data sources to extract and load content automatically, eliminating the need for custom code in many cases. Built-in indexers support Azure-native sources including Blob Storage for unstructured files, Azure SQL Database for relational data, and Table Storage for semi-structured data, along with external sources like SharePoint Online and OneLake for file indexing (generally available September 2025). For broader external integration, custom skills can extend indexers to non-native sources through web API calls during the ingestion pipeline. Indexers can be scheduled for recurring runs via the Azure portal or REST APIs.

Change detection mechanisms facilitate incremental updates in the pull model by identifying only modified or new data since the last run. High-water-mark change detection, such as comparing timestamps, is commonly applied for sources like Azure SQL Database to process deltas efficiently after an initial full load. Soft-delete detection handles removals by querying flags or markers in the source, keeping the index synchronized without full rescans.
These features are automatic for some sources and configurable for the rest. Supported formats emphasize JSON as the primary structure for push operations, while pull indexers parse diverse inputs including CSV, PDF, Microsoft Office documents, and other text-based files from Blob Storage, with optional image extraction. The maximum size per document is approximately 16 MB, aligning with API payload limits to maintain performance across tiers. During ingestion, both models can integrate AI enrichments like text extraction or vectorization via skillsets, enhancing content for downstream retrieval.
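A minimal sketch of push-model batching under the documented 1,000-document cap follows. The `mergeOrUpload` action name corresponds to the REST API's upsert action; real uploads would send each payload through an HTTP client or the SDK, and the sample documents here are hypothetical.

```python
def to_batches(docs, batch_size=1000):
    """Yield push-model payloads, each within the 1,000-document batch cap."""
    for i in range(0, len(docs), batch_size):
        chunk = docs[i:i + batch_size]
        # Each document carries an @search.action: upload, merge,
        # mergeOrUpload (upsert), or delete.
        yield {"value": [{"@search.action": "mergeOrUpload", **d} for d in chunk]}


# 2,500 hypothetical documents split into 3 compliant batches (1000 + 1000 + 500).
docs = [{"id": str(n), "content": f"doc {n}"} for n in range(2500)]
payloads = list(to_batches(docs))
```

Callers would also need to keep each serialized payload under the 16 MB request limit, splitting further if individual documents are large.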

Index Structure and Optimization

The index schema in Azure AI Search defines the structure of searchable content through a collection of fields, each specifying a data type and attributes that determine query behavior. Supported field types include Edm.String for text, Edm.Int32 or Edm.Double for numbers, Collection(Edm.String) for arrays, and complex types for nested structures such as addresses within hotel documents. Key attributes include searchable for enabling full-text or vector search with tokenization, filterable for exact-match filtering without tokenization, sortable for ordering results, facetable for navigation facets (limited to 32 KB for strings), and key for unique identifiers (which must be a single Edm.String field). The retrievable attribute controls whether a field appears in query results, defaulting to true except where disabled as an optimization.

For full-text search, Azure AI Search employs an inverted index that maps tokenized terms to the documents containing them, facilitating efficient term-based retrieval by scanning and matching against query tokens. Vector fields, defined as Collection(Edm.Single) with a dimensions attribute, store embeddings for similarity searches, supporting up to 4,096 dimensions per field to accommodate various embedding models (generally available August 2025). These fields integrate with vector search profiles to configure algorithms like HNSW for approximate nearest neighbor indexing, with recent enhancements including support for nested vector fields (preview May 2025).

Optimization of the index involves analyzers, suggesters, and scoring profiles to enhance tokenization, user experience, and relevance. Analyzers process text fields during indexing and querying, using the standard Lucene tokenizer by default or custom configurations with filters for stemming, lowercasing, and stopword removal; for example, language-specific analyzers like en.microsoft support linguistic variations in English.
Suggesters enable autocomplete by precomputing partial term matches from designated string fields, requiring at least three characters for activation and leveraging analyzers for token generation. Scoring profiles customize relevance by applying weights to fields (e.g., boosting a "title" field by a factor of 5) or functions based on freshness, magnitude, distance, or tags, allowing dynamic adjustments without index rebuilds.

Best practices for index optimization emphasize selectivity and efficiency: limit searchable fields to essential content to reduce storage and processing overhead, as string fields are limited to approximately 32 KB while the total document size is capped at 16 MB; filterable fields follow similar string limits but focus on exact matches without tokenization. For vectors, apply scalar quantization (up to 4x compression) or binary quantization (up to 28x compression) to compress embeddings, reducing storage usage while maintaining search accuracy through techniques like rescoring (generally available August 2024). Use facets judiciously for navigation to avoid performance bottlenecks, and periodically review index composition to prune unnecessary data, ensuring smaller indexes for faster queries.
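A minimal index definition illustrating these field types, attributes, suggesters, and scoring profiles might look as follows, expressed as a Python dict mirroring the JSON schema. The `hotels` fields, suggester, and scoring profile are hypothetical examples, and the exact schema accepted by the service can vary by API version.

```python
hotel_index = {
    "name": "hotels",
    "fields": [
        # The key must be a single Edm.String field.
        {"name": "HotelId", "type": "Edm.String", "key": True, "filterable": True},
        {"name": "HotelName", "type": "Edm.String",
         "searchable": True, "sortable": True},
        # Language-aware analyzer for English text.
        {"name": "Description", "type": "Edm.String",
         "searchable": True, "analyzer": "en.microsoft"},
        {"name": "Tags", "type": "Collection(Edm.String)",
         "searchable": True, "filterable": True, "facetable": True},
        {"name": "Rating", "type": "Edm.Double",
         "filterable": True, "sortable": True},
    ],
    "suggesters": [
        # Powers autocomplete over hotel names.
        {"name": "sg", "searchMode": "analyzingInfixMatching",
         "sourceFields": ["HotelName"]}
    ],
    "scoringProfiles": [
        # Boost name matches 5x relative to description matches.
        {"name": "boostName",
         "text": {"weights": {"HotelName": 5, "Description": 1}}}
    ],
}
```

Posting this body to the indexes endpoint (or passing it to an SDK index client) creates the index; documents pushed afterward must conform to the declared fields.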

Querying Capabilities

Search Query Syntax

Azure AI Search supports two primary query syntaxes for full-text search: a simple query parser, which is the default and suitable for basic keyword and phrase searches, and a full Lucene query syntax for more advanced operations. These syntaxes are used in the search parameter of search requests, allowing users to construct expressions that match terms in indexed documents.

The simple query syntax enables straightforward searches using keywords, phrases, and basic operators. Single terms or multiple words act as implicit OR queries, matching documents containing any of the terms (e.g., budget hotel retrieves results with either "budget" or "hotel"). Phrases require exact matches when enclosed in double quotes (e.g., "Roach Motel" finds only the precise sequence). Boolean-like operators include + for AND (requiring all terms, e.g., pool + ocean), - for NOT (excluding terms, e.g., pool -ocean), and | for explicit OR (though OR is the default). Prefix wildcards use * to match terms starting with a pattern (e.g., azure* matches any term beginning with "azure"), limited to a maximum term length of 1,000 characters. Special characters are escaped with a backslash (e.g., luxury\+hotel treats + as a literal). The syntax relies on the index's analyzers for tokenization, which can be tested via the Analyze Text API. It is limited to exact and prefix matching, without support for fuzzy or proximity searches, for which the full Lucene syntax is recommended.

For complex scenarios, the full Lucene query syntax provides advanced features, enabled by setting queryType=full in the search request. Boolean operators include AND (or +), OR, and NOT (or -), allowing precise combinations (e.g., wifi AND luxury requires both terms, while wifi NOT budget excludes "budget"). Proximity searches use the tilde operator to specify word distance (e.g., "hotel airport"~5 matches the terms within 5 words of each other).
Fuzzy matching approximates similar terms with ~ followed by an optional edit distance (e.g., blue~ or blue~1 for up to one edit, limited to 50 terms per query). Boosting prioritizes certain terms using the caret ^ (e.g., "recently renovated"^3 weights the phrase higher by a factor of 3). Expressions such as category:budget AND "recently renovated"^3 demonstrate field-scoped and boosted queries. Query size limits apply, such as 8 KB for GET requests and up to 1,024 clauses.

Structured queries leverage OData filter expressions via the $filter parameter to apply boolean logic on index fields, independent of full-text search. Comparison operators include eq for equality (e.g., $filter=location eq 'Redmond' matches exact string values, case-sensitively by default), ne for inequality, and range operators gt, lt, ge, le (e.g., $filter=Rating ge 4 for ratings of 4 or higher). Logical operators combine conditions: and, or, not (e.g., $filter=ParkingIncluded and Rating ge 4). Collection operators such as any and all handle complex fields (e.g., Rooms/any(room: room/BaseRate lt 200.0) filters to hotels with rooms under $200). Filters are exact matches and do not undergo full-text analysis.

Query requests include parameters for controlling output and navigation. The top parameter limits results (e.g., "top": 7 returns the top 7 matches). Pagination uses skip to offset results, combined with top to fetch subsequent pages (e.g., skipping 10 for the next set). The select parameter projects specific fields into responses (e.g., "select": "HotelId, HotelName, Address/StreetAddress" includes only those fields). Sorting applies via $orderby using OData syntax (e.g., $orderby=Rating desc, HotelName asc). These parameters are specified in the request URL or body, such as in POST requests to the Search Documents API.
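The query parameters above can be assembled into a GET request URL as in the following sketch; the service and index names are hypothetical, and the `$`-prefixed OData parameter names follow the GET conventions described in this section.

```python
from urllib.parse import urlencode


def build_get_query(service, index, search, flt=None, orderby=None,
                    top=10, skip=0, select=None, api_version="2023-11-01"):
    """Assemble a Search Documents GET URL with OData query parameters."""
    params = {"api-version": api_version, "search": search,
              "$top": top, "$skip": skip}
    if flt:
        params["$filter"] = flt      # exact-match boolean logic, no tokenization
    if orderby:
        params["$orderby"] = orderby
    if select:
        params["$select"] = select
    return (f"https://{service}.search.windows.net/indexes/{index}/docs?"
            + urlencode(params))


# Hypothetical query combining full-text search, a filter, sorting, and projection.
url = build_get_query("my-search", "hotels", "wifi AND luxury",
                      flt="ParkingIncluded and Rating ge 4",
                      orderby="Rating desc, HotelName asc",
                      select="HotelId,HotelName")
```

For queries that exceed the 8 KB GET limit, the same parameters are sent (without the `$` prefixes) in a JSON body via POST.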

Result Ranking and Relevance

Azure AI Search employs the BM25 algorithm as the default method for computing relevance scores in full-text search queries, ranking results based on term frequency (how often query terms appear in a document) and inverse document frequency (how rare those terms are across the entire index). This probabilistic model favors documents that contain multiple instances of the query terms while penalizing overly long documents to avoid bias toward verbose content, resulting in an unbounded score where higher values indicate greater relevance. The algorithm's parameters, such as k1 (controlling term frequency saturation) and b (document length normalization), can be tuned at the index level to fine-tune scoring behavior for specific workloads. Introduced in public preview in March 2021, semantic ranking enhances initial BM25 results by reranking the top 50 documents using Microsoft's deep learning language models to better capture query intent and semantic similarity. Since its introduction, semantic ranking has been enhanced with new language models in November 2024, query rewrite capabilities in preview, and support for integration with scoring profiles in May 2025 (preview), improving relevance for complex queries. This AI-driven stage assigns a separate @search.rerankerScore (ranging from 4.0 for high relevance to 0.0 for low) based on contextual understanding, promoting results that align more closely with user meaning beyond exact keyword matches. To enable it, users must configure a semantic profile in the index, specifying prioritized fields like titles or content with token limits, and activate it via query parameters; it is available on Basic tier and above with associated billing for queries exceeding the free tier. Hit highlighting improves result usability by applying markup to matching query terms within returned document fields, allowing users to quickly identify relevant snippets. 
By default, Azure AI Search wraps highlighted terms in <em> tags, but custom pre- and post-tags (e.g., <b> and </b>) can be specified for styling such as bolding or colorization. Configuration involves designating searchable fields in the query request (e.g., highlight=description,title), with options to limit the number of highlights per field to control response size and performance.

For tailored relevance, custom scoring profiles enable developers to modify BM25 scores through field weights, tag boosts, or mathematical functions integrated into the index definition. Field boosting assigns higher weights to important attributes, such as prioritizing a "productName" field over "description" (e.g., a weight of 3.0 versus 1.0), while functions like freshness can dynamically elevate recent documents based on a datetime field, applying a decaying boost over a specified duration (e.g., stronger for items updated within the last 30 days). Distance functions further customize scores by factoring in geospatial proximity, boosting results closer to a reference location (e.g., within 5 kilometers) to support location-aware searches. Up to 100 such profiles can be defined per index, with one selected per query to balance relevance without altering the core BM25 computation.
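The BM25 scoring described above can be illustrated with a toy implementation of the standard formula, using the same k1 and b parameters the service exposes for tuning. This is a textbook sketch of the algorithm, not the service's exact internal scoring code.

```python
import math


def bm25_score(tf, df, num_docs, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """Score one query term in one document with the BM25 formula.

    tf: term frequency in the document; df: documents containing the term.
    k1 controls term-frequency saturation; b controls length normalization.
    """
    # Rare terms (low df) get a higher inverse-document-frequency weight.
    idf = math.log(1 + (num_docs - df + 0.5) / (df + 0.5))
    # Repeated occurrences help, but with diminishing returns; long documents
    # are penalized relative to the average length.
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm


# A rare term (in 2 of 1,000 docs) appearing 3 times in an average-length document.
score = bm25_score(tf=3, df=2, num_docs=1000, doc_len=120, avg_doc_len=120)
```

The behavior matches the description in the text: more occurrences raise the score with saturation, while terms common across the index contribute much less.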

AI-Enhanced Features

Cognitive Skillsets and Enrichments

Cognitive skillsets in Azure AI Search enable the orchestration of AI-powered processing pipelines that enrich raw data during indexing, transforming unstructured content into searchable and analyzable forms. A skillset is a reusable collection of skills—modular components that perform specific enrichment tasks—attached to an indexer for execution on data from sources like Azure Blob Storage. These skills leverage Azure AI services, such as Azure AI Language for natural language processing or Azure AI Vision for image analysis, to extract insights without requiring custom code in most cases.

Built-in skills form the core of skillsets, providing preconfigured functionality such as text splitting, entity recognition, optical character recognition (OCR), translation, and sentiment analysis. Recent additions include the GenAI Prompt skill (in preview as of May 2025) for LLM-based field population and enhancements to the Document Layout skill for structure-aware chunking with image offset metadata. For instance, the Text Split skill divides long documents into smaller pages to facilitate downstream processing, while the Entity Recognition skill (powered by Azure AI Language) identifies and categorizes entities such as people, locations, or organizations, creating new enriched fields like entity lists. Custom skills extend this capability by integrating external logic, such as proprietary models or Azure Functions, invoked via a web API interface for tasks like specialized pattern matching. Skills can be chained sequentially, where the output of one (e.g., OCR extracting text from images) serves as input to another (e.g., sentiment analysis on the extracted text), enabling complex knowledge mining workflows.

Enrichments are the structured outputs generated by skillsets, which can be projected into the search index as merged fields or new entities to enhance query relevance.
For example, translation skills normalize multilingual content for global search, OCR on scanned documents or images unlocks text for indexing, and sentiment analysis adds contextual scores (positive, neutral, negative) to customer reviews. These projections support applications like extracting key phrases from articles or generating text embeddings for semantic understanding, with outputs optionally stored in a knowledge store for further analysis. To integrate AI services, a multi-service Azure AI services resource is attached to the skillset, handling billing for enrichment calls, and keyless options are available for secure, managed identity-based access. Skillsets are defined in JSON format, specifying skill types, inputs, outputs, and dependencies, then attached to an indexer via its skillset reference to run during data ingestion. This declarative approach allows pipelines to process documents in an in-memory enrichment tree, with output field mappings directing results to index fields. For efficiency, especially in large-scale or iterative indexing, an enrichment cache—stored in Azure Storage—persists processed outputs, enabling incremental updates by reprocessing only changed portions and avoiding redundant AI calls, which reduces costs and execution time.

Vector and Semantic Search

Azure AI Search provides advanced capabilities for vector search and semantic ranking, enabling similarity-based retrieval of content represented as numerical embeddings and enhanced understanding of query intent. Vector search facilitates approximate nearest neighbor (ANN) matching over high-dimensional vectors derived from text, images, or other data types, while semantic ranking applies reranking to prioritize results based on contextual relevance using deep learning models. These features support hybrid queries that combine traditional keyword matching with AI-driven techniques, improving accuracy in diverse applications such as recommendation systems and knowledge retrieval.
Vector search in Azure AI Search operates by indexing embeddings using the Hierarchical Navigable Small World (HNSW) algorithm, which enables efficient approximate k-nearest neighbors (kNN) queries for similarity detection. Supported similarity metrics include cosine distance, allowing for precise measurement of directional proximity regardless of vector magnitude. Vector fields support up to 4,096 dimensions (as of August 2025), enabling compatibility with advanced embedding models. Service limits cap an index at 1,000 fields, and for multi-vector support the total is limited to 100 vectors per document across all fields (public preview as of 2025), accommodating scenarios where multiple embeddings represent different aspects of the content. Embeddings are typically generated at indexing time through integration with cognitive skillsets, such as those leveraging Azure OpenAI embedding models. Semantic search enhances initial query results by applying a reranker powered by Microsoft's multilingual deep learning models, akin to BERT architectures, to rescore the top 50 documents based on semantic fit. This reranking process assigns a secondary score ranging from 4.0 (highly relevant) to 0.0 (irrelevant), boosting overall relevance without requiring vector embeddings. It supports query rewriting to generate up to 10 variants for broader coverage and can produce captions or answers extracted from indexed content to aid user comprehension. Multimodal support, introduced in 2024 with enhancements in 2025, extends indexing and retrieval to handle combined text and vector representations of images and videos through integration with Azure AI Vision. This allows extraction of visual content via skills like Document Extraction, followed by generation of descriptive embeddings that enable hybrid queries across modalities, such as searching for textual descriptions of embedded diagrams in PDFs.
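At its core, vector similarity reduces to comparing embedding directions. The toy brute-force example below illustrates cosine-based nearest-neighbor ranking; a real index uses HNSW for approximate search over hundreds to thousands of dimensions rather than an exhaustive scan over three:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: direction, not magnitude."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" standing in for real model output.
index = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.0, 1.0, 0.2],
    "doc3": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]

# Rank documents by similarity to the query embedding, best first.
ranked = sorted(index, key=lambda d: cosine_similarity(index[d], query), reverse=True)
print(ranked)  # ['doc1', 'doc3', 'doc2'] — doc1 points closest to the query
```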
These capabilities are particularly suited for retrieval-augmented generation (RAG) workflows, where vector and hybrid search retrieve relevant content chunks to ground large language models (LLMs) in enterprise data, ensuring factual responses. Security is maintained through query filters on text or numeric fields, restricting retrieval to authorized subsets of the index while preserving performance at scale.
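A grounded retrieval request of this kind combines keyword and vector clauses with a security filter in one body. The field names ("contentVector", "group_ids") and the truncated toy embedding below are assumptions for illustration, not a definitive request shape:

```python
# Hedged sketch of a hybrid (keyword + vector) RAG query body with a
# security trimming filter, as might be POSTed to a documents/search endpoint.
rag_query = {
    "search": "quarterly revenue guidance",      # keyword side of the hybrid query
    "vectorQueries": [{
        "kind": "vector",
        "vector": [0.12, -0.07, 0.33],           # query embedding (toy, truncated)
        "fields": "contentVector",               # illustrative vector field name
        "k": 5,                                  # top-k nearest chunks for grounding
    }],
    # Restrict retrieval to documents the caller's groups may see.
    "filter": "group_ids/any(g: search.in(g, 'finance,execs'))",
    "top": 5,
}
print(rag_query["filter"])
```

The retrieved chunks would then be injected into the LLM prompt; the filter runs server-side, so unauthorized documents never leave the index.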

Integrations and Use Cases

Compatibility with Azure Services

Azure AI Search supports direct connectors to several data sources for seamless data ingestion. These include Azure Blob Storage, which allows extraction of text and metadata from blobs into search documents with change detection capabilities; Azure Cosmos DB via its SQL API for querying and indexing items; and Azure SQL Database and Azure SQL Managed Instance for pulling relational data into search indexes. For non-Azure sources, integration with Azure Logic Apps enables workflows to index data from external systems and third-party services. In the AI stack, Azure AI Search integrates with Azure OpenAI for generating embeddings in enrichment pipelines, using the Azure OpenAI Embedding skill to connect to deployed models for vectorization during indexing. Additionally, it leverages Azure AI Document Intelligence through skills like Document Layout to process and extract text, tables, and layout information from PDFs and other document formats, enhancing searchability of unstructured content. Developer tools for Azure AI Search include SDKs for .NET (via the Azure.Search.Documents library), Java (via azure-search-documents), and Python (via azure-search-documents), which facilitate index management, querying, and data operations programmatically. REST APIs provide a foundational interface for all interactions, supporting HTTP-based calls for indexing and search. Integration with Power BI enables search-enabled dashboards by connecting to knowledge stores or using connectors to query indexes directly in reports. For orchestration, Azure Functions can host custom skills and web APIs for indexers, allowing outbound connections for data processing via built-in authentication like Easy Auth. Azure Synapse Analytics supports feeding analytics-processed data into Azure AI Search indexes through pipelines in Azure Data Factory or Synapse, enabling large-scale data movement and transformation before indexing.
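All of the SDK operations ultimately map onto this REST surface. The sketch below builds a search request without sending it; the service name, index name, API key, and api-version are placeholders:

```python
import json

# Hedged sketch of the REST surface the SDKs wrap; values are placeholders.
service, index_name, api_version = "my-search", "hotels-index", "2024-07-01"
url = (f"https://{service}.search.windows.net/indexes/{index_name}/docs/search"
       f"?api-version={api_version}")
body = json.dumps({"search": "ocean view", "top": 3})
headers = {"Content-Type": "application/json", "api-key": "<query-key>"}

# An actual request would POST `body` to `url` with `headers`
# (e.g., via urllib.request, or more idiomatically via the SDKs above).
print(url)
```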

Applications in RAG and AI Workloads

Azure AI Search plays a pivotal role in Retrieval-Augmented Generation (RAG) patterns by enabling the indexing of enterprise data from diverse sources such as databases, files, and storage, which is then retrieved through vector or hybrid search to augment prompts in large language models (LLMs). This approach grounds AI responses in authoritative content, enhancing accuracy and reducing reliance on potentially outdated model training data. For instance, in chatbot applications like extensions to Microsoft Copilot, Azure AI Search supports classic RAG pipelines where a user query triggers retrieval of relevant documents or vectors, which are injected into the LLM prompt for context-aware generation. Additionally, agentic retrieval, introduced in 2025, decomposes complex queries into subqueries executed in parallel by LLMs, providing structured responses with citations for downstream applications. In search-driven applications, Azure AI Search facilitates faceted navigation, allowing users to refine results through interactive filters on attributes like categories, prices, or locations, which is particularly valuable in e-commerce platforms. The Microsoft Careers Portal exemplifies this, handling over 50,000 daily searches across thousands of job listings and millions of applications annually with dynamic facet updates, hierarchical filtering, and geospatial queries to match candidates to roles based on proximity. Geospatial search extends to location-based applications, such as retail apps that combine vector embeddings with geographic filters to deliver personalized recommendations, ensuring low-latency responses even under high loads. For AI agents, Azure AI Search's semantic search capabilities power knowledge bases in specialized domains like legal and finance, where understanding intent beyond keywords is crucial.
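Faceted and geospatial refinement of this kind is expressed through facets and OData geo functions in the query. The request body below is a hedged sketch; the field names ("category", "price", "location") are illustrative:

```python
# Hypothetical faceted, location-aware query body.
# POINT coordinates are longitude then latitude; geo.distance is in km.
point = "geography'POINT(-122.13 47.64)'"
faceted_query = {
    "search": "running shoes",
    "facets": ["category,count:10", "price,interval:50"],  # drilldown buckets
    "filter": f"geo.distance(location, {point}) lt 5",     # within 5 km
    "orderby": f"geo.distance(location, {point}) asc",     # nearest first
}
print(faceted_query["filter"])
```

Each facet clause asks the service to return bucketed counts alongside results, which the application renders as clickable filters.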
At UBS, the Legal AI Assistant leverages semantic ranking to query 26 million multilingual documents, identifying related concepts and synonyms to accelerate legal research while enforcing document-level security. Similarly, DraftWise employs Azure AI Search with embedding models for contract review, retrieving precise clauses via semantic similarity and providing verifiable citations, which improves search efficiency by 30%. In 2025 updates, agentic retrieval enhancements support multi-agent workflows by enabling knowledge agents to orchestrate subqueries and supply grounded data to collaborative AI systems, facilitating agent-to-agent interactions in complex tasks. Case studies highlight Azure AI Search's integration with Microsoft Fabric for unified analytics and search, where indexed data from Fabric's lakehouses augments generative AI workflows to minimize errors. As of August 2025, improved indexing from Fabric lakehouses supports agentic workflows. This combination supports RAG implementations that ground responses in enterprise datasets, helping to reduce hallucinations in generative AI outputs through retrieval of verified content before response generation. Such integrations enable scalable, secure AI applications across industries, demonstrating measurable improvements in reliability and user trust.

Security and Administration

Access Controls and Compliance

Azure AI Search implements authentication through API keys or Microsoft Entra ID integration. API keys come in two types: query keys for read-only access to search operations and admin keys for full management, with a limit of two admin keys per service to minimize exposure risks. Alternatively, Microsoft Entra ID enables role-based access control (RBAC) for identity-based authorization, supporting user and group assignments along with managed identities for automated, keyless connections from other services. As of July 2025, user-assigned managed identities are generally available, allowing assignment of identities to search services via the 2025-05-01 API version and the Azure portal. Authorization in Azure AI Search extends beyond service-level controls to include granular data protection. Built-in RBAC roles such as Search Index Data Contributor and Search Index Data Reader govern data plane operations like indexing and querying, while control plane roles like Search Service Contributor manage resource provisioning. Document-level security is enforced via query-time filters that apply field-level permissions, such as access control lists (ACLs) or role-based rules, ensuring users only retrieve authorized documents based on their identity tokens. Network-level authorization includes IP restrictions to limit access from specified IP ranges and private endpoints for secure connectivity within a virtual network, preventing public internet exposure. As of July 2025, Network Security Perimeter support is generally available to control network access to the service. Additionally, shared private link support for Azure AI service skills, introduced in November 2024, enables private connections. Compliance features in Azure AI Search align with major regulatory standards to support enterprise data protection. The service holds certifications for SOC 1, SOC 2, SOC 3, and ISO 27001, and supports GDPR requirements, enabling organizations to meet audit obligations for information security management.
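Document-level security trimming of this kind is typically implemented by injecting the caller's group claims from an identity token into a query filter. A minimal sketch, assuming an illustrative "group_ids" collection field on each document:

```python
def security_filter(user_groups):
    """Build an OData filter that matches documents whose group_ids
    collection contains any of the caller's groups. search.in stays
    efficient even for long lists of group IDs."""
    groups = ",".join(user_groups)
    return f"group_ids/any(g: search.in(g, '{groups}'))"

# Groups would normally come from the user's identity token claims.
print(security_filter(["hr", "managers"]))
# -> group_ids/any(g: search.in(g, 'hr,managers'))
```

Because the filter is evaluated server-side on every query, unauthorized documents are trimmed before results are returned, rather than relying on client-side redaction.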
As of September 2025, support for Azure Confidential Computing is generally available, allowing data to be processed in use on confidential virtual machines for enhanced security and compliance, with a 10% surcharge. Data residency is maintained by processing and storing data within the selected Azure region, with options for geographic boundaries to comply with sovereignty laws. Encryption protects data at rest using Microsoft-managed keys (AES-256) or customer-managed keys via Azure Key Vault, and in transit over TLS 1.2 or higher via HTTPS. Auditing capabilities in Azure AI Search facilitate compliance monitoring through integration with Azure Monitor. Resource logs capture query activities, indexing operations, and service events, which can be routed to Log Analytics workspaces for analysis. Log retention defaults to 30 days for metrics but extends up to two years in Log Analytics for detailed query logs, allowing for extended auditing without custom storage.

Monitoring and Maintenance

Azure AI Search provides comprehensive monitoring capabilities through integration with Azure Monitor, which collects and aggregates platform metrics and resource logs to assess service health, performance, and resource utilization. Key metrics include queries per second (QPS), which measures the average rate of search queries processed; average search latency, tracking the duration of query execution in seconds; storage size, indicating the total capacity consumed by indexes; and the percentage of throttled search queries, reflecting dropped requests due to capacity limits. These metrics are available at one-minute granularity and can be visualized in Azure dashboards or exported for analysis. Alerts can be configured in Azure Monitor to notify administrators of potential issues, such as when throttled queries reach 90% of capacity thresholds, enabling proactive scaling of replicas or partitions to maintain performance. For instance, alerts on latency exceeding a defined threshold or QPS approaching service limits help prevent degradation. Activity log alerts further track control-plane events, like API key rotations or service operations. Diagnostic settings allow enabling resource logs for detailed auditing of operations, including query executions (e.g., search, suggest, autocomplete) and indexer activities (e.g., status checks, runs). These logs, categorized under OperationLogs, capture request details, response times, and error codes, and can be routed to Azure Monitor Logs (via a Log Analytics workspace), Azure Storage, or Event Hubs for long-term retention and querying with Kusto Query Language (KQL). Exporting to Log Analytics facilitates advanced analysis, such as correlating query latency spikes with concurrent indexing workloads. Maintenance practices involve regular indexer status checks through portal views or KQL queries to ensure data freshness and detect failures, such as partial indexing errors (HTTP 207 responses). Query testing is supported via Search Explorer in the portal, where users can iterate on queries to optimize scoring and reduce latency, often by refining filters or analyzers.
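The throttling alert described above amounts to a simple threshold check over the metric stream. The sketch below uses illustrative numbers, not service defaults:

```python
def should_alert(throttled_queries, total_queries, capacity_threshold_pct=10.0):
    """Fire when the throttled percentage reaches 90% of a configured
    capacity threshold (here a hypothetical 10%, so the trigger is 9%)."""
    throttled_pct = 100.0 * throttled_queries / max(total_queries, 1)
    return throttled_pct >= 0.9 * capacity_threshold_pct

print(should_alert(80, 1000))   # 8.0% throttled, below the 9% trigger -> False
print(should_alert(95, 1000))   # 9.5% throttled, above the trigger -> True
```

In practice the same comparison is configured declaratively as an Azure Monitor metric alert rule rather than computed in application code.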
For backups, administrators can use provided .NET samples to export index schemas and document snapshots as JSON files, facilitating restoration to another service instance. Troubleshooting common issues, such as indexing latency—where updates may take seconds to minutes to propagate across replicas—is addressed by incorporating timestamps in queries to verify recent documents and by reviewing logs for shard merging delays that cause temporary latency spikes. Throttling (HTTP 503 errors) is resolved by scaling resources based on usage trends, while network-related delays are isolated using client-side headers to measure elapsed times.
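A backup export along the lines of the .NET sample can be modeled as a paging loop that pulls every document and writes the snapshot to JSON. The stubbed fetch_page below stands in for real REST calls, and the output path is arbitrary; a real export must also save the index schema:

```python
import json
import os
import tempfile

def export_documents(fetch_page, page_size=1000, out_path=None):
    """Page through all documents with skip/top semantics and dump them
    to a JSON file, returning the number of documents exported."""
    out_path = out_path or os.path.join(tempfile.gettempdir(), "index-backup.json")
    docs, skip = [], 0
    while True:
        page = fetch_page(skip=skip, top=page_size)  # e.g., a POST to /docs/search
        if not page:
            break
        docs.extend(page)
        skip += page_size
    with open(out_path, "w") as f:
        json.dump(docs, f)
    return len(docs)

# Stub standing in for a real paged REST call, for illustration only:
data = [{"id": str(i)} for i in range(2500)]
count = export_documents(lambda skip, top: data[skip:skip + top])
print(count)  # 2500
```

Restoration then reverses the process: recreate the schema on the target service and upload the saved documents in batches.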
