Using the Azure Search service from JavaScript


The new Azure Search service is a nice ‘search as a service’ offering from Microsoft. Just add your documents, and then run some queries. It’s exposed as a REST API which talks JSON :¬D

It has a free tier, limited to 3 indexes and 10,000 documents, but you can of course start paying the money and index more stuff.

I just wrote a JavaScript client (I couldn't find an official Microsoft one) to connect to the API. Let's explore how we can use it.

Note this article assumes you are familiar with writing JavaScript for node.js and the browser.

Creating the Search Service

First, open up the new Azure Portal and go to NEW -> Search and enter some details in.

You’ll need to switch to the free tier, otherwise you’ll start clocking up a bill.

We’ll start off using Node.js, as only a few of the features (quite correctly) can be used from the browser.

Once it's created, go to the properties and keys sections of the search service blade, and make a note of your url and an admin key.


First install the package:

$ npm install azure-search

Creating the Client

Now let's write some JavaScript in node to create a client.

var AzureSearch = require('azure-search');

var client = AzureSearch({
    url: "", // your search service url
    key: ""  // your admin key
});

Creating the Index

Now we have a client, we can create an index in the Search Service. To do this, we need to create a schema, which will tell the service what kind of data we want to store and search. There’s more information about how to create the schema in the Microsoft documentation, but for a simple example, I’ll have some text, and a key that I’ll use to refer to the text.

var schema = {
  name: 'myindex',
  fields:
   [ { name: 'id',
       type: 'Edm.String',
       searchable: false,
       filterable: true,
       retrievable: true,
       sortable: true,
       facetable: true,
       suggestions: false,
       key: true },
     { name: 'description',
       type: 'Edm.String',
       searchable: true,
       filterable: false,
       retrievable: true,
       sortable: false,
       facetable: false,
       suggestions: true,
       key: false } ],
  scoringProfiles: [],
  defaultScoringProfile: null,
  corsOptions: { allowedOrigins: ["*"] }
};

client.createIndex(schema, function(err, schema){
  if (err) console.log(err);
  // schema created
});

Note that at the bottom of the file there’s a corsOptions setting which sets allowedOrigins to ["*"]. We’ll be using this later to access the index from the browser.
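If you don't want to allow every origin, allowedOrigins can instead list the specific sites you'll be calling the service from. A small sketch (the domains below are placeholders, not real values from this article):

```javascript
// Restrict CORS to specific origins instead of "*" — the domains here
// are hypothetical; substitute the sites that will host your pages.
var corsOptions = {
  allowedOrigins: [
    "http://example.com",
    "https://example.com"
  ]
};
```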

Populating the Index

Now we’re ready to start adding documents to the index. In the schema we specified id and description fields. So we just need to supply an object with these fields.

var document = {
  id: "document1",
  description: "this is a document with a description"
};

client.addDocuments('myindex', [document], function(err, confirmation){
  if (err) console.log(err);
  // document added
});

In fact we can send a batch of documents through at once, by adding more items to the array.
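As a sketch (reusing the `client` created earlier; the document contents are made up for illustration), a batch upload looks like this:

```javascript
// Sketch of batch indexing — assumes the `client` created earlier;
// the documents themselves are hypothetical examples.
function indexBatch(client, callback) {
  var documents = [
    { id: "document1", description: "the first document" },
    { id: "document2", description: "the second document" },
    { id: "document3", description: "the third document" }
  ];
  // One call indexes the whole batch.
  client.addDocuments('myindex', documents, callback);
}
```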

Querying the Index

We can query the index from node, but the Search Service supports CORS, which allows us to query directly from the browser without having to stand up any of our own server-side infrastructure. This is where the CORS settings came in when we created the schema.

One thing to be careful of; don’t put your Search Service admin key in a public web page. Instead, go back to the portal and use a query key (see the manage keys button when you’re looking at the keys).


Now we can create a web page where we can call the index, and pass in some search queries. To do this we need to add a reference to azure-search.min.js (or use browserify, and just require 'azure-search').

<script src="azure-search.min.js"></script>
<script>
  var client = AzureSearch({
    url: "",
    key: "" // use a query key here, not an admin key
  });

  client.search('myindex', { search: 'document' }, function (err, results) {
    // results is an array of matching documents
  });
</script>


Note that from the browser, only search, suggest, lookup and count will work.
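For example, since the description field was created with suggestions enabled, a browser-side suggest (autocomplete) call might look like the sketch below. This is an assumption based on the shape of the search call; check the module's documentation for the exact query options in your version.

```javascript
// Sketch: suggestions (autocomplete) from the browser — assumes a
// `client` built with a query key; the query shape here mirrors the
// search call and is an assumption, not confirmed API documentation.
function suggestFor(client, term, callback) {
  client.suggest('myindex', { search: term }, function (err, suggestions) {
    if (err) return callback(err);
    callback(null, suggestions);
  });
}
```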

For more information, please consult the Microsoft documentation and the documentation for the module.


The search service looks quite powerful. We’ve only scratched the surface of the options here. I’m keen to combine the search service with a trace listener, so you can index your application’s logging.

It’s great to see Microsoft move away from the awkward API conventions used for the storage system, which included complicated header signing and XML. This JSON approach with a simple API key as a header is nice and simple.

It’s also great to see CORS support out of the box, which makes it easy to consume this service directly from the browser.

Personally I think the API version number looks out of place on the URL, and would be better as a header, but maybe that’s just me.

I also would prefer not to have to specify my schema. I’d like to just throw JSON objects at the service, and then be able to query on any of the fields, but I guess that’s DocumentDB!