
  • Richard 4:31 pm on December 10, 2014

    I’m now blogging over here 

    Sorry, I got fed up with WordPress and moved to Jekyll on GitHub: http://richorama.github.io/

     
  • Richard 4:50 pm on December 9, 2014

    Running Julia on Azure Websites 

    Warning: This is a complete hack


    TL;DR

    A node process hosts the Julia server, and passes the name of the named pipe to it as an argument.

    Some Background on IISNode

    Azure Websites hosts node.js apps with IISNode. This basically does two things: it activates the node process, and forwards HTTP requests to it using a named pipe.

    Websites doesn’t restrict you to hosting JavaScript files; in fact, you can start any old binary. I have had some success hosting Go on Websites before (using node to forward requests over CGI).
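
    To make that concrete, here’s a minimal sketch of a node app as IISNode sees it: process.env.PORT contains a named pipe path rather than a TCP port number, and node’s listen() accepts either, so the same code runs locally and on Azure Websites.

    var http = require('http');

    // Under IISNode, PORT is a named pipe path (not a TCP port);
    // http.Server.listen() accepts either form.
    http.createServer(function (req, res) {
        res.end('hello from IISNode');
    }).listen(process.env.PORT || 3000);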

    Julia

    Julia is a language that’s been around for a couple of years. It’s general purpose, with a strength in mathematical programming. I really like the syntax, and it seems to fit with the way I think about programming.

    Julia has a web stack, with Morsel.jl as the ‘Sinatra like’ web framework. I’m playing around with some of my own ideas, built on the same stack. I’ve called it Jolt.jl.

    Julia sits on the same networking library (libuv) as node, so I thought it would be simple to get it running on Azure. I was wrong.

    Hosting on Azure Websites

    Let’s go through the steps I took to get it working.

    • You need to bring the Julia runtime with you, so copy the bin and lib folders from %HOME_DIRECTORY%\AppData\Local\Julia-0.3.2 into your repository.
    • You need to bring the packages too, so let’s copy them in as well from your %HOME_DIRECTORY%\.julia directory.
    • Delete all the .git folders from the packages, and remove all the .gitignore files.
    • We need to tell Julia to look locally for the packages, so you’ll need to add this line to the start of your program push!(LOAD_PATH, ".") (we’ll do this in a bit).
    • You need to rewrite part of the HttpServer module, so it uses a PipeServer instead of a TcpServer. This allows us to respond to requests on the named pipe. This will look something like this:
    immutable HttpHandler
        handle::Function
        sock::Base.UVServer
        events::Dict
    
        HttpHandler(handle::Function) = new(handle, Base.PipeServer(), defaultevents)
    end
    
    • You’ll need to write a new run method, to bind your named pipe to the PipeServer:
    export run_pipe
    
    function run_pipe(server::Server, pipe::ASCIIString)
        id_pool = 0 # Increments for each connection
        sock = server.http.sock
        websockets_enabled = server.websock != nothing
        Base.uv_error("listen", !Base.bind(sock::Base.PipeServer, pipe))
        listen(sock)
        event("listen", server, 0)
    
        while true # handle requests, Base.wait_accept blocks until a connection is made
            client = Client(id_pool += 1, accept(sock))
            client.parser = ClientParser(message_handler(server, client, websockets_enabled))
            @async process_client(server, client, websockets_enabled)
        end
    end
    
    • Remove all traces of GnuTLS from HttpServer (remove the using statement and the run_https function). It’s more trouble than it’s worth!
    • Fix up paths in any deps.jl files which point to binaries on your local machine, and set them to D:\home\site\wwwroot\* instead.
    • Now you can write a program. I used my Jolt.jl framework, but you can use what you like…
    push!(LOAD_PATH, ".")
    
    using HttpServer
    using Jolt
    using JoltView
    using JoltJson
    
    app = jolt()
    
    app.get("/") do req, res, ctx
        "hello world"
    end
    
    http = HttpHandler(app.dispatch) 
    server = Server(http)
    run_pipe(server, ASCIIString(ARGS[1].data))
    
    • My attempts to start the program directly from IISNode failed. Instead I wrote a node app (server.js) to start the Julia app.
    var spawn = require('child_process').spawn;
    console.log("starting julia");

    // add the environment variables Julia needs that IISNode doesn't set
    var env = process.env;
    env["HOMEDRIVE"] = "C:";
    env["HOMEPATH"] = "\\home\\site\\wwwroot";

    // pass the named pipe (process.env.port) to Julia as an argument
    var julia = spawn('bin\\julia.exe',
        ["server.jl", process.env.port],
        {
            env : env,
            cwd : process.cwd()
        });
    
    julia.stdout.on('data', function (data) {
      console.log(data.toString());
    });

    julia.stderr.on('data', function (data) {
      console.log(data.toString());
    });
    
    julia.on('close', function (code) {
      console.log('Julia exited with code ' + code);
    });
    
    • Note that it’s necessary to add a couple of missing environment variables when starting the process.
    • Push to Azure Websites and stand back!

    Alternatively, you can just clone my repo, which seems to work.

     
  • Richard 10:14 am on September 5, 2014

    Redirecting to HTTPS in Node.js on Azure Websites 

    If you’ve got a node.js website running in Azure Websites, you get SSL by default; if you bring your own domain name, you can easily set up SSL with your own certificate.

    The default behaviour is that the website serves both HTTP and HTTPS requests, but what if you want to switch users automatically over to HTTPS (i.e. disable HTTP)?

    The normal approach when using express.js is to use some middleware which detects the protocol and redirects accordingly. Like this:

    // this doesn't work in Azure Websites
    function requireHTTPS(req, res, next) {
        if (!req.secure) {
            return res.redirect('https://' + req.get('host') + req.url);
        }
        next();
    }
    
    app.use(requireHTTPS);
    

    req.secure is just shorthand for checking the protocol of the incoming request. However, in Azure Websites your server is fronted by a web server running ARR, which does the SSL termination. This means the request you see will always be HTTP. The above code sample therefore creates a redirect loop: the middleware keeps seeing HTTP, so it keeps redirecting to HTTPS.

    Fortunately ARR adds some additional headers to the request. These are:

    cookie: ARRAffinity=71111a2979a8796b0cc6860d7f98cb7519aaea861cc96f15896edbb25a259064
    host: XYZ.azurewebsites.net
    max-forwards: 10
    x-liveupgrade: 1
    x-arr-log-id: 7cffa211-21f6-42c2-8b7d-e9eacac8adc8
    disguised-host: XYZ.azurewebsites.net
    x-site-deployment-id: XYZ
    x-original-url: /url
    x-forwarded-for: 212.69.41.154:4219
    x-arr-ssl: 2048|128|DC=com, DC=microsoft, DC=corp, DC=redmond, CN=MSIT Machine Auth CA 2|C=US, S=WA, L=Redmond, O=Microsoft, OU=OrganizationName, CN=*.azurewebsites.net 
    

    The x-arr-ssl header is only added for HTTPS requests, so we can use it to determine whether the original request was HTTP or HTTPS.

    function requireHTTPS(req, res, next) {
        if (!req.get('x-arr-ssl')) {
            return res.redirect('https://' + req.get('host') + req.url);
        }
        next();
    }
    
    app.use(requireHTTPS);
    

    You could go one better, and only do the upgrade if you’re running in Azure, which means you can run your site in plain HTTP in your local environment:

    function requireHTTPS(req, res, next) {
        if (req.get('x-site-deployment-id') && !req.get('x-arr-ssl')) {
            return res.redirect('https://' + req.get('host') + req.url);
        }
        next();
    }
    
    app.use(requireHTTPS);
    
     
    • stpdev 4:07 pm on September 23, 2015

      Azure has its own method of handling HTTPS redirection. Node.js apps in Azure are still managed through IIS runtimes.

      Their recommended approach is to add rewrite rules to the web.config.

      The web.config is only created after an initial deployment (the easiest way to get it generated with all the node handling in place). You can grab it via the FTP credentials set up through the portal and simply add the rewrite rule to the existing rules.

      Once it’s there, in the root dir of your deployed app, Azure will use it.

  • Richard 1:39 pm on September 2, 2014

    Using the Azure Search service from JavaScript 


    The new Azure Search service is a nice ‘search as a service’ offering from Microsoft. Just add your documents, and then run some queries. It’s exposed as a REST API which talks JSON :¬D

    It has a free tier, limited to 3 indexes and 10,000 documents, but you can of course start paying the money and index more stuff.

    I just wrote a JavaScript client (I couldn’t find a Microsoft one) to connect to the API; let’s explore how we can use it.

    Note this article assumes you are familiar with writing JavaScript for node.js and the browser.

    Creating the Search Service

    First, open up the new Azure Portal and go to NEW -> Search and enter some details in.

    You’ll need to switch to the free tier, otherwise you’ll start clocking up a bill.

    We’ll start off using Node.js, as only a few of the features (quite correctly) can be used from the browser.

    Once it’s created go to the properties and keys sections of the search service blade, and make a note of your url and an admin key.

    Installation

    First install the package:

    $ npm install azure-search
    

    Creating the Client

    Now let’s write some JavaScript in node to create a client.

    var AzureSearch = require('azure-search');
    var client = AzureSearch({
        url: "https://MY_SEARCH_SERVICE_NAME.search.windows.net",
        key:"MY_SEARCH_SERVICE_KEY"
    });
    

    Creating the Index

    Now we have a client, we can create an index in the Search Service. To do this, we need to create a schema, which will tell the service what kind of data we want to store and search. There’s more information about how to create the schema in the Microsoft documentation, but for a simple example, I’ll have some text, and a key that I’ll use to refer to the text.

    var schema = { 
      name: 'myindex',
      fields:
       [ { name: 'id',
           type: 'Edm.String',
           searchable: false,
           filterable: true,
           retrievable: true,
           sortable: true,
           facetable: true,
           suggestions: false,
           key: true },
         { name: 'description',
           type: 'Edm.String',
           searchable: true,
           filterable: false,
           retrievable: true,
           sortable: false,
           facetable: false,
           suggestions: true,
           key: false } ],
      scoringProfiles: [],
      defaultScoringProfile: null,
      corsOptions:  { allowedOrigins: ["*"]} };
    
    client.createIndex(schema, function(err, schema){
      if (err) console.log(err);
      // schema created
    });
    

    Note that at the bottom of the file there’s a corsOptions setting which sets allowedOrigins to ["*"]. We’ll be using this later to access the index from the browser.

    Populating the Index

    Now we’re ready to start adding documents to the index. In the schema we specified id and description fields. So we just need to supply an object with these fields.

    var document = {
      id: "document1",
      description: "this is a document with a description"
    };
    client.addDocuments('myindex', [document], function(err, confirmation){
      if (err) console.log(err);
      // document added
    });
    

    In fact we can send a batch of documents through at once, by adding more items to the array.
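
    For example, a quick sketch (the extra documents are invented for illustration):

    var documents = [
      { id: "document2", description: "another document" },
      { id: "document3", description: "yet another document" }
    ];

    client.addDocuments('myindex', documents, function(err, confirmation){
      if (err) console.log(err);
      // batch added
    });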

    Querying the Index

    We can query the index from node, but the Search Service supports CORS, which allows us to query directly from the browser without having to stand up any of our own server-side infrastructure. This is where the CORS settings came in when we created the schema.

    One thing to be careful of: don’t put your Search Service admin key in a public web page. Instead, go back to the portal and use a query key (see the manage keys button when you’re looking at the keys).


    Now we can create a web page where we can call the index, and pass in some search queries. To do this we need to add a reference to azure-search.min.js (or use browserify, and just require ‘azure-search’).

    <html>
        <head>
            <script src="azure-search.min.js"></script>
        </head>
        <body>
            <script>
    
            var client = AzureSearch({
              url: "https://MY_SEARCH_SERVICE_NAME.search.windows.net",
              key:"MY_QUERY_KEY"
            });
    
            client.search('myindex', {search:'document'}, function(err, results){
              // results is an array of matching documents
            });
    
            </script>
        </body>
    </html>
    

    Note that from the browser, only search, suggest, lookup and count will work.
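
    As a rough sketch, counting the documents in the index might look like this (I’m assuming count follows the same callback style as search; check the module’s documentation for the exact signature):

    client.count('myindex', function(err, count){
      // count is the number of documents in the index
    });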

    For more information, please consult the Microsoft documentation and the documentation for the module.

    Conclusion

    The search service looks quite powerful. We’ve only scratched the surface of the options here. I’m keen to combine the search service with a trace listener, so you can index your application’s logging.
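
    As a taster, here’s a hypothetical sketch of that idea, reusing the addDocuments call from earlier (the ‘logs’ index name and document shape are invented):

    // forward each log message to a search index as a small document
    function logToIndex(message) {
      var entry = { id: Date.now().toString(), description: message };
      client.addDocuments('logs', [entry], function(err){
        if (err) console.log('failed to index log entry', err);
      });
    }

    logToIndex('application started');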

    It’s great to see Microsoft move away from the awkward API conventions used for the storage system, which included complicated header signing and XML. This JSON approach with a simple API key as a header is nice and simple.

    It’s also great to see CORS support out of the box, which makes it easy to consume this service directly from the browser.

    Personally I think the API version number looks out of place on the URL, and would be better as a header, but maybe that’s just me.

    I also would prefer not to have to specify my schema. I’d like to just throw JSON objects at the service, and then be able to query on any of the fields, but I guess that’s DocumentDB!

     
  • Richard 10:37 am on August 29, 2014

    Overrides in Orleans Configuration 

    If you look at the standard OrleansConfiguration.xml configuration file you get with Orleans, it looks something like this:

    <?xml version="1.0" encoding="utf-8"?>
    <OrleansConfiguration xmlns="urn:orleans">
      <Globals>
        <StorageProviders>
          <Provider Type="Orleans.Storage.MemoryStorage" Name="MemoryStore" />
        </StorageProviders>
        <SeedNode Address="localhost" Port="11111" />
      </Globals>
      <Defaults>
        <Networking Address="localhost" Port="11111" />
        <ProxyingGateway Address="localhost" Port="30000" />
        <Tracing DefaultTraceLevel="Info" TraceToConsole="true" TraceToFile="{0}-{1}.log">
          <TraceLevelOverride LogPrefix="Application" TraceLevel="Info" />
        </Tracing>
        <Statistics MetricsTableWriteInterval="30s" PerfCounterWriteInterval="30s" LogWriteInterval="300s" WriteLogStatisticsToTable="true"/>
      </Defaults>
      <Override Node="Primary">
        <Networking Address="localhost" Port="11111" />
        <ProxyingGateway Address="localhost" Port="30000" />
      </Override>
    </OrleansConfiguration>
    

    This configuration starts a silo listening on port 30000 (ProxyingGateway), and other silos can talk to it on port 11111 (Networking). It also identifies a SeedNode, which is the primary silo in the cluster. The seed node address and port match those specified for this silo, which makes this the primary.

    You’ll also notice the Override section at the bottom, which specifies the same ProxyingGateway and Networking ports again.

    In this case, the Override section doesn’t do much, but suppose we want to start a secondary silo. We can specify the ports for the secondary node to use here, like this (in this case a silo on the same machine, so on different port numbers):

    <?xml version="1.0" encoding="utf-8"?>
    <OrleansConfiguration xmlns="urn:orleans">
      <Globals>
      ... 
      </Globals>
      <Defaults>
      ...
      </Defaults>
      <Override Node="Primary">
        <Networking Address="localhost" Port="11111" />
        <ProxyingGateway Address="localhost" Port="30000" />
      </Override>
      <Override Node="Secondary">
        <Networking Address="localhost" Port="11112" />
        <ProxyingGateway Address="localhost" Port="30002" />
      </Override>
    </OrleansConfiguration>
    

    This allows us to use the same configuration file for both a primary and a secondary node, as the other settings will be the same (they must all specify the same primary, i.e. the seed node).

    To make use of the override, when you start the silo, just specify the name as the first argument.

    $ OrleansHost.exe Primary
    $ OrleansHost.exe Secondary
    

    You can also specify the configuration file you want to use, in case you have several (the default is OrleansConfiguration.xml).

    $ OrleansHost.exe Primary MyConfiguration.xml
    

    Interestingly, when starting a number of silos, it only seems necessary to tell a client about one of the silos (a primary or a secondary); it figures out the cluster members automatically.

     
  • Richard 10:22 am on August 13, 2014

    Public IP Addresses for Amazon 

    I previously blogged about the public IP address ranges for Azure.

    The public IP address ranges for Amazon have recently been updated. Amazon has over 8 million addresses, roughly 8 times the number Azure has.

    Interestingly, these are not so evenly distributed:

    [chart: public IP addresses per region]

    Raw data:

    DC                              Total public IP addresses
    US East (Northern Virginia)                       3694592
    US West (Oregon)                                  1507328
    EU (Ireland)                                      1332224
    Asia Pacific (Tokyo)                               739328
    US West (Northern California)                      655360
    Asia Pacific (Sydney)                              327680
    Asia Pacific (Singapore)                           313344
    South America (Sao Paulo)                          245760
    China (Beijing)                                     65536
    GovCloud                                            16384
    TOTAL                                             8897536

    I have also updated my tool to look up which DC a website is hosted in, to include the Amazon IP ranges.

     
  • Richard 11:12 am on August 7, 2014

    Enabling SSL for Self Hosted Nancy 

    One of the things I like about Nancy is the ease of creating self-hosted HTTP services.

    Here’s how to enable HTTPS for your service.

    First of all, create your SSL certificate:

    $ openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem
    $ openssl pkcs12 -export -in cert.pem -inkey key.pem -out mycert.pfx
    

    Then go to ‘Manage Computer Certificates’ in the start menu (I’m using Windows 8.1).

    Right click on ‘Personal’ and import the freshly created mycert.pfx.


    Now go to your newly installed certificate, and get the thumbprint:

    [screenshot: certificate thumbprint]

    Now you need to register the URL with Windows:

    $ netsh http add urlacl url=https://+:4443/ user=YOUR_USERNAME
    $ netsh http add sslcert ipport=0.0.0.0:4443 certhash=YOUR_THUMBPRINT_WITHOUT_SPACES appid={06aabebd-3a91-4b80-8a15-adfd3c8a0b14} 
    

    You need to substitute your username, and the certificate thumbprint you copied (without the spaces).

    The appid is just a random GUID; feel free to create your own.

    Now start Nancy on the URI https://localhost:4443/.

    using (var host = new NancyHost(new Uri("https://localhost:4443/")))
    {
        host.Start();
        Console.ReadKey();
    }
    
     
    • mbharallmansitech 1:38 pm on December 16, 2014

      How can I check whether HTTPS requests are working in the Nancy host or not?

    • Richard 1:50 pm on December 16, 2014

      send it an HTTPS request from your browser?

    • Richard 2:12 pm on December 16, 2014

      Sounds like it didn’t work.

      Try putting some logging in your app, and looking at the HTTP traffic in fiddler.

    • Manuraj 2:20 pm on December 16, 2014

      Let me try once again..

    • Manuraj 2:47 pm on December 16, 2014

      I created a new certificate and registered it successfully. Then I used the browser to send an HTTPS request and set a breakpoint, but it was not working.

    • Manuraj 5:53 am on December 17, 2014

      Getting error NET::ERR_CERT_AUTHORITY_INVALID

    • Manuraj 6:47 am on December 18, 2014

      Getting error “HTTP Error 503. The service is unavailable” in the browser. Can you guide me on how I can get a response using HTTPS?

    • Manuraj 2:14 pm on January 5, 2015

      Hi Richard,

      I am able to create the certificate and register it successfully.

      But when I restart the computer, Nancy starts but the Nancy URL no longer works.
      It gives the error
      “GET https://localhost:4444/t1 net::ERR_CONNECTION_RESET”.

      I have tried to find a solution, but with no luck.

    • Manuraj 2:58 pm on January 5, 2015

      Every time I have to run
      netsh http add sslcert ipport=0.0.0.0:4443 certhash=YOUR_THUMBPRINT_WITHOUT_SPACES appid={06aabebd-3a91-4b80-8a15-adfd3c8a0b14}
      After that it works, but after a restart I have to use the same command again.

      Is there any solution for this?

    • afterhourscoding 2:28 pm on June 8, 2015

      The following command failed for me:
      netsh http add sslcert ipport=0.0.0.0:4443 certhash=YOUR_THUMBPRINT_WITHOUT_SPACES appid={06aabebd-3a91-4b80-8a15-adfd3c8a0b14}

      With error:
      SSL Certificate add failed, Error 1312
      A specified logon session does not exist. It may already have been terminated.

      I had to import the certificate using MMC.exe’s certificates snap-in for the local computer instead of using certmgr.msc. You can see the certificate listed using ‘certutil -store My’.

    • channygold 10:42 am on July 30, 2015

      Thanks for this post. Really helpful!

    • Divyanshu Vyas 10:07 am on May 8, 2019

      This will work if you register your URL as below:
      netsh http add urlacl url=https://YOURIPADDRESS:4443/ user=YOUR_USERNAME

      instead of
      netsh http add urlacl url=https://+:4443/ user=YOUR_USERNAME

  • Richard 9:00 am on August 4, 2014

    Public IP Addresses for Microsoft Azure 

    Microsoft publish a list of public IP addresses for the compute services hosted in their Azure Datacenters. When they say ‘compute’ they also include SQL Database, and the storage accounts seem to be included too.

    I thought it would be interesting to write a script to count the total number of IP addresses available for each DC.
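
    The counting itself is simple, since the ranges are published as CIDR subnets: a /n subnet contains 2^(32-n) addresses. Here’s a minimal sketch (the subnets below are invented for illustration; the real list comes from Microsoft’s published file):

    // count public IPs per DC from CIDR subnets:
    // a /n prefix contains 2^(32-n) addresses
    var regions = {
      uswest: ['23.99.0.0/18', '23.100.32.0/21'],  // illustrative only
      europenorth: ['40.69.0.0/18']
    };

    function countAddresses(subnets) {
      return subnets.reduce(function (total, cidr) {
        var prefix = parseInt(cidr.split('/')[1], 10);
        return total + Math.pow(2, 32 - prefix);
      }, 0);
    }

    Object.keys(regions).forEach(function (dc) {
      console.log(dc + ': ' + countAddresses(regions[dc]));
    });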

    Here’s a pretty graph.

    [graph: public IP addresses per region]

    Whilst the number of public IPs is no way of working out the number of machines provisioned in each DC, it does at least give you an idea of the magnitude of each of the regions.

    Some interesting things to note:

    1. The EU data centers are similar sizes, Dublin being slightly bigger.
    2. The 5 DCs in Asia, Brazil and Japan added together are smaller than US West.
    3. The total number of IPs is just over 1 million.
    4. There seems to be a step change between the largest 6, the next 4, and then the 3 small (and new) DCs in Japan and Brazil.
    5. US West is 28 times larger than Brazil South.
    6. Almost 60% of the IPs are in the USA.

    Here’s my processed data in a table:

    DC               Total public IP addresses
    uswest                              179504
    useast                              155808
    europenorth                         150400
    europewest                          144320
    usnorth                             118112
    ussouth                             109120
    asiaeast                             62384
    uscentral                            62336
    asiasoutheast                        57440
    useast2                              46832
    japanwest                            18912
    japaneast                            10976
    brazilsouth                           6288
    TOTAL                              1122432

    I think it will be really interesting to keep an eye on the IP ranges, and see how they change over time (or is it just me?).

    I also published a tool to look up if an IP (or domain name) is hosted in Azure, it also tells you which DC it’s in.

     
  • Richard 10:40 am on July 30, 2014

    Improving Orleans Silo Startup Times 

    The startup times for Orleans Silos seem a bit variable. I’ve tried a few things out, and got the startup time down when using the Dev/Test Silo. I don’t know if all of these make a difference, but they might be worth trying if you’re as impatient as me.

    • Remove any Storage Providers you’re not using from the configuration.
    • Turn off logging (unless things aren’t working!) by setting TraceToConsole="false" and TraceToFile="false" in the Tracing element.
    • If you’re not putting any client code in the Dev/Test Silo (perhaps you’re connecting from another application, such as an ASP.NET app) then remove the Orleans.OrleansClient.Initialize("DevTestClientConfiguration.xml"); line from Main.cs.
    • Remove any Grain Collection classes you’re not using from the Orleans SDK\LocalSilo\Applications directory.
    • Delete the Orleans.FSharp.dll from SDK\LocalSilo.

    I’ve got startup time down to under 6 seconds.

    My DevTestServerConfiguration.xml file looks like this:

    <?xml version="1.0" encoding="utf-8"?>
    <OrleansConfiguration xmlns="urn:orleans">
      <Globals>
        <StorageProviders>
          <Provider Type="Orleans.Storage.MemoryStorage" Name="AzureStore" />
        </StorageProviders>
        <SeedNode Address="localhost" Port="11111" />
      </Globals>
      <Defaults>
        <Networking Address="localhost" Port="11111" />
        <ProxyingGateway Address="localhost" Port="30000" />
        <Tracing DefaultTraceLevel="Error" TraceToConsole="false" TraceToFile="false"/>
        <Statistics WriteLogStatisticsToTable="false"/>
      </Defaults>
      <Override Node="Primary">
        <Networking Address="localhost" Port="11111" />
        <ProxyingGateway Address="localhost" Port="30000" />
      </Override>
    </OrleansConfiguration>
    

    My DevTestClientConfiguration.xml file looks like this:

    <?xml version="1.0" encoding="utf-8" ?>
    <ClientConfiguration xmlns="urn:orleans">
      <GatewayProvider ProviderType="Config"/>
      <Gateway Address="localhost" Port="30000"/>
      <Tracing DefaultTraceLevel="Info" TraceToConsole="false" TraceToFile="false"/>
      <Statistics WriteLogStatisticsToTable="false"/>
    </ClientConfiguration>
    
     
  • Richard 2:20 pm on July 23, 2014

    Using Observers for Pub Sub in Project Orleans 

    Orleans has a cool feature for pub-sub messaging. This allows you to register for grain-originated events without having to poll a grain. Instead you can ask to be called back when the interesting thing happens.

    To do this you create an ‘observer’ object. You then create a reference to this object, and pass it to the grain. The grain maintains a list of all observers. The grain can then send them all a message with a single call.

    To create an observer, create an interface which inherits from IGrainObserver. You should put this in your Grain Interfaces project.

    public interface IFooGrainObserver : IGrainObserver
    {
        void Foo(string message);
    }
    

    Note that the method signature for Foo returns void. This is a requirement for subscribers.

    Next create an implementation of this interface. It doesn’t really matter which project you put this in. It’s probably best placed in the project where you want the code to actually run, so if you’re sending a message back to the client, put it alongside your client code.

    class FooObserver : IFooGrainObserver
    {
        public void Foo(string message)
        {
            Console.WriteLine(message);
        }
    }
    

    We’ll need a grain which is going to receive the subscriptions, and perform the publication.

    This is the grain interface which has a method for registering a subscriber (Subscribe), and another to call when you wish to publish a message (Publish):

    public interface IPubSubGrain : IGrain
    {
        Task Subscribe(IFooGrainObserver observer);
        Task Publish(string message);
    }
    

    And this is the implementation:

    public class PubSubGrain : GrainBase, IPubSubGrain
    {
    
        ObserverSubscriptionManager<IFooGrainObserver> subscribers = new ObserverSubscriptionManager<IFooGrainObserver>();
    
        public Task Subscribe(IFooGrainObserver observer)
        {
            subscribers.Subscribe(observer);
            return TaskDone.Done;
        }
    
        public Task Publish(string message)
        {
            subscribers.Notify(x => x.Foo(message));
            return TaskDone.Done;
        }
    }
    

    Note that Orleans provides an ObserverSubscriptionManager which helps you manage the subscriptions, and send notifications.

    Now to actually make this work.

    When the Grain Interfaces project compiles, a factory is created for our subscriber (FooGrainObserverFactory), just like the factories created for the grains.

    To use the factory, we pass in an instance of our IFooGrainObserver interface (which will be a FooObserver). This will give us back an object we can then pass to the Subscribe method of our grain.

    That completes the subscription process. Now, just call Publish.

    Your client code (perhaps this is in your Dev/Test Silo) will look something like this:

    var grain = PubSubGrainFactory.GetGrain(0);
    
    // register a subscriber.
    var observerRef = await FooGrainObserverFactory.CreateObjectReference(new FooObserver());
    await grain.Subscribe(observerRef);
    
    // send a publish
    await grain.Publish("Hello World");
    
    // Hello World is printed on the console by the instance of FooObserver
    

    Simple!

     