The 10-Year Platform: Shutting Down KRE

Summary: The original pico engine, KRE, is no more. But the ideas and capabilities of the platform live on in the new pico engine.

A few years ago, I announced on this blog that Kynetx was done. But the platform we'd created, the Kynetx Rules Engine, or KRE, lived on. Today I am announcing that KRE is dead too. We shut it down last week.

Despite the demise of Kynetx, the platform continued to be open and available. Fuse was still running on it and my students were using it for class and research. But Fuse stopped working for good last spring when the MVNO we were using to process cellular data from the car devices shut down. And the new pico engine is working so well that we use it for everything now.

KRE was started in 2007 and envisioned as a cloud-based programming platform for events. …


Is Sovrin Decentralized?

Summary: To determine whether Sovrin is decentralized, we have to ask questions about the purpose of decentralization and how Sovrin supports those purposes.

People sometimes ask "Is Sovrin decentralized?" given that it relies on a permissioned ledger. Of course, the question is raised in an attempt to determine whether or not an identity system based on a permissioned ledger can make a legitimate claim that it's self-sovereign. But whether or not a specific system is decentralized is just shorthand for the real questions. To answer the legitimacy question, we have to examine the reasons for decentralization and whether or not the system in question adequately addresses those reasons.

This excellent article from Vitalik Buterin discusses the meaning of decentralization. Vitalik gives a great breakdown of different types of decentralization, listing architectural decentralization, political decentralization, and logical decentralization.

Of these, logically decentralized systems are the rarest. …

Equifax and Correlatable Identifiers

Summary: We can avoid security breaches that result in the loss of huge amounts of private data by creating systems that don't rely on correlatable identifiers. Sovrin is built to use non-correlatable identifiers by default while still providing all the necessary functionality we expect from an identity system.

Yesterday word broke that Equifax had suffered a data breach that resulted in 143 million identities being stolen. This is a huge deal, but not really too shocking given the rash of data breaches that have filled the news in recent years.

The typical response when we hear about these security problems is "why was their security so bad?" While I don't know any specifics about Equifax's security, it's likely that their security was pretty good. But the breach still occurred. Why? Because of Sutton's Law. When Willie Sutton was asked why he robbed banks, he reputedly said "'cause that's where the money is." …

Sovrin Self-Sustainability

Summary: For Sovrin to become a global, public utility that helps everyone create and manage self-sovereign identities, it must be independent and self-sustaining. This post outlines four independence milestones for the Sovrin Foundation.

The Sovrin Foundation began life about a year ago. We launched the Sovrin Network just last month. For Sovrin to achieve its goal of providing self-sovereign identity for all, the Foundation and the Network have to be independent and self-sustaining.

The idea for Sovrin-style identity and the technology behind it was developed by Evernym. To their credit, Evernym’s founders, Jason Law and Timothy Ruff, recognized that for their dream of a global identity system to become reality, they’d have to make Sovrin independent of Evernym. At present, Evernym continues to make huge contributions to Sovrin in time, code, money, and people. Our goal is to reduce these contributions, at least as a percentage of the total, over time.


The Case for Decentralized Identity

Summary: We cannot decentralize many interesting systems without also decentralizing the identity systems upon which they rely. We're finally in a position to create truly decentralized systems for digital identity.

I go back and forth between thinking decentralization is inevitable and thinking it's just too hard. Lately, I'm optimistic because I think there's a good answer for one of the sticking points in building decentralized systems: decentralized identity.

Most interesting systems have an identity component. As Joe Andrieu says, "Identity is how we keep track of people and things and, in turn, how they keep track of us." The identity component is responsible for managing the identifiers and attributes the system needs to function, authenticating the party making a request, and determining whether that party is authorized to make it. But building an identity system that is usable, secure, and privacy-preserving is difficult—much harder than most…

Launching the Sovrin Network

Summary: The Sovrin network for identity is now live and accepting transactions. Sovrin provides a global identity infrastructure that supports self-sovereign identity and verifiable claims. This blog post describes the launch ceremony that we conducted. This is the beginning of Identity for All.

This morning I participated in the launch of the Sovrin Network. About six weeks ago, we set up the Alpha network for testing. Validators participated in exercises to ensure the network was stable and could achieve consensus under a variety of circumstances.

This morning we transitioned from the Alpha network to the Provisional network. There are several important differences between the Alpha network and the Provisional network:

Identity, Sovrin, and the Internet of Things

Summary: Building the Internet of Things securely requires that we look to non-hierarchical models for managing trust. Sovrin provides a Web of Trust model for securing the Internet of Things that increases security and availability while giving device owners more control.
<a href="https://blogs.harvard.edu/doc/">Doc Searls</a> put me onto this report from Cable Labs: <a href="http://www.cablelabs.com/vision-secure-iot/">A Vision for Secure IoT</a>. Not bad stuff as far as it goes. The executive summary states:
IoT therefore represents the next major axis of growth for the Internet. But, without a significant change in how the IoT industry approaches security, this explosion of devices increases the risk to consumers and the Internet. To reduce these risks, the IoT industry and the broader Internet ecosystem must work together to mitigate the risks of insecure devices and ensure future devices are more secure by developing and adopting robust security standards for IoT devices. Industry-led standards represent the most promising approach…

A Mesh for Picos

Summary: This post describes some changes we're making to the pico engine to better support a decentralized mesh for running picos.

Picos are Internet-first actors that are well suited for use in building decentralized solutions on the Internet of Things. If you're unfamiliar with the idea, here are a few resources for exploring picos and our ideas about how they enable a decentralized IoT:

  • Picos: Persistent Compute Objects—This brief introduction to picos and the components that make up the pico ecosystem is designed to make clear the high-level concepts necessary for understanding picos and how they are programmed. Over the last year, we've been replacing KRE, the engine picos run on, with a new, Node-based engine that is smaller and more flexible.
  • Reactive Programming with Picos—This is an introduction to picos as a method for doing reactive programming. The article contains many links to other, more…

Sovrin Status: Alpha Network Is Live

Summary: The Sovrin Network is live and undergoing testing. This Alpha Stage will allow us to ensure the network is stable and the distributed nodes function as planned.
Sovrin is based on a permissioned distributed ledger. Permissioned means that there are known validators that achieve consensus on the distributed ledger. The validators are configured so as to achieve <a href="https://en.wikipedia.org/wiki/Byzantine_fault_tolerance">Byzantine fault tolerance</a> but because they are known, the network doesn't have to deal with <a href="https://en.wikipedia.org/wiki/Sybil_attack">Sybil attacks</a>. This has several implications:
  1. The nodes are individually unable to commit transactions, but collectively they work together to create a single record of truth. Individual nodes are run by organizations called "Sovrin Stewards."
  2. Someone or something has to choose and govern the Stewards. In the case of Sovrin, that is the Sovrin Foundation. The nodes are governed according to the Sovrin Trust Framework.
The Sovrin Network has launched in alpha. The purpose of the Alpha Network is to allow Founding Stewards to do everything necessary to install and test their validator nodes before we collectively launch the Provisional Network. It's our chance to do a dry run to work out any kinks we may find before the initial launch.

Here’s what we want to accomplish as part of this test run:
  • Verify technical readiness of the validator nodes
  • Verify security protocols and procedures for the network
  • Test emergency response protocols and procedures
  • Test the distributed, coordinated upgrade of the network
  • Get some experience running the network as a community
  • Work out any kinks and bugs we may find.
With these steps complete, Sovrin will become a technical reality. It's an exciting step. We currently have nine stewards running validator nodes and expect more to come online over the next few weeks. Because the Alpha Network is for conducting tests, we anticipate that the genesis blocks on the ledger will be reset once the testing is complete.



Once the Alpha Network has achieved its goals, it will transition to the Provisional Network. The Sovrin Technical Governance Board (TGB) chose to operate the network in a provisional stage as a beta period where all transactions are real and permanent, but the network still operates under a limited load. This will enable the development team and Founding Stewards to do performance, load, and security testing against a live network before the Board of Trustees declares it generally available.
After many months of planning and working for the network to go live, we're finally on our way. Congratulations and gratitude to the team at Evernym doing the heavy lifting, the Founding Stewards who are leading the way, and the many volunteers who sacrifice their time to build a new reality for online identity.
Photo Credit: Sunrise from dannymoore1973 (CC0 Public Domain)

Updated Pico Programming Workflow

Summary: This page introduces the tool chain in the programming workflow for picos.
I just got done updating the page in the Pico documentation that talks about the <a href="https://picolabs.atlassian.net/wiki/display/docs/Programming+Workflow">pico programming workflow</a>. I use the idea of a toolchain as an organizing principle. I think it turned out well. If you program picos, it might be of some help. 

Sovrin Web of Trust

Summary: Sovrin uses a heterarchical, decentralized Web of Trust model to build trust in identifiers and give people clues about what and who to trust.

The Web of Trust model for Sovrin is still being developed, but differs markedly from the trust model used by the Web.

The Web (specifically TLS/SSL) depends on a hierarchical certificate authority model called Public Key Infrastructure (PKI) to determine which certificates can be trusted. To determine that the domain name of the site you're on is associated with the public key being used to encrypt HTTP transmissions (and perhaps that both are controlled by a specific organization), your browser uses a certificate it downloads from the website itself. How, then, can this certificate be trusted? Because it was cryptographically signed by another organization, a certificate authority, that issued the certificate and presumably checked the credentials of the company buying it for the domain.


Sovrin In-Depth Technical Review

Summary: Sovrin Foundation has engaged Engage Identity to perform a security review of Sovrin's technology and processes. Results will be available later this summer.
The <a href="http://sovrin.org/">Sovrin Foundation</a> and <a href="http://engageidentity.com/">Engage Identity</a> announced a new partnership today. Security experts from Engage Identity will be completing an in-depth technical review of the Sovrin Foundation’s entire security architecture.



The Sovrin Foundation wants to ensure that the technology everyone depending on Sovrin relies on is secure. That technology protects many valuable assets, including private personal information and essential business data. As a result, we wanted to be fully aware of the risks and vulnerabilities in Sovrin. In addition, the Sovrin Foundation will benefit from having a roadmap for future security investment opportunities.



We're very happy to be working with Engage Identity, a leader in the security and identity industry. Established and emerging cryptographic identity protocols are one of their many areas of expertise. They have experience providing security analysis and recommendations for identity frameworks.



The Engage Identity team is led by Sarah Squire, who has worked on user-centric open standards for many organizations including NIST, Yubico, and the OpenID Foundation. Sarah will be joined by Adam Migus and Alan Viars, both experienced authorities in the fields of identity and security.



The final report will be released this summer and will include a review of the current security architecture, as well as opportunities for future investment. We intend to make the results public. Anticipated subjects of in-depth research are:
  • Resilience to denial of service attacks
  • Key management
  • Potential impacts of a Sovrin-governed namespace
  • Minimum technical requirements for framework participants
  • Ongoing risk management processes
Sovrin Foundation is excited to take this important step forward with Engage Identity to ensure that the future of self-sovereign identity management can thrive and grow.

Hyperledger Welcomes Project Indy

Summary: The Sovrin Foundation announced at the 24th Internet Identity Workshop (IIW) that its distributed ledger, custom-built for independent digital identity, has been accepted into incubation under Hyperledger, the open source collaborative effort created to advance cross-industry blockchain technologies hosted by The Linux Foundation.
We’re excited to announce Indy, a new Hyperledger project for supporting independent identity on distributed ledgers. Indy provides tools, libraries, and reusable components for providing digital identities rooted on blockchains or other distributed ledgers so that they are interoperable across administrative domains, applications, and any other silo.

Why Indy?

Internet identity is broken. There are too many anti-patterns and too many privacy breaches. Too many legitimate business cases are poorly served by current solutions. Many have proposed distributed ledger technology as a solution; however, building decentralized identity on top of distributed ledgers that were designed to support something else (cryptocurrency or smart contracts, for example) leads…

Pico Programming Lesson: Modules and External APIs

Summary: A new pico lesson is available that shows how to use user-defined actions in modules to wrap an API.
I recently added a new lesson to the <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/1185969/Pico+Programming+Lessons">Pico Programming Lessons</a> on <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/32148114/Modules+and+External+APIs+Lesson">Modules and External APIs</a>. KRL (the pico programming language) has parameterized modules that are great for incorporating external APIs into a pico-based system. 



This lesson shows how to define actions that wrap API requests, put them in a module that can be used from other rulesets, and manage the API keys. The example (<a href="https://github.com/Picolab/pico_lessons/tree/master/modules_apis">code here</a>) uses the Twilio API to create a <code>send_sms()</code> action. Of course, you can also use functions to wrap API requests where appropriate (see the <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/32148114/Modules+and+External+APIs+Lesson#ModulesandExternalAPIsLesson-ActionsandFunctions">Actions and Functions</a> section of the lesson for more detail on this). 
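An action wrapping an external API might look something like the following KRL sketch. This is illustrative rather than the lesson's exact code: the ruleset name, the URL construction, and the configuration parameters are assumptions, and real credentials would be supplied by a keys module rather than hard-coded defaults.

```krl
ruleset twilio_sdk {
  meta {
    name "Twilio SDK (sketch)"
    // account_sid and auth_token are supplied by the caller
    // via "use module ... with ..." when this module is loaded
    configure using account_sid = ""
                    auth_token  = ""
    provides send_sms
  }
  global {
    // Twilio's REST endpoint; credentials embedded for HTTP basic auth
    base_url = <<https://#{account_sid}:#{auth_token}@api.twilio.com/2010-04-01/Accounts/#{account_sid}/>>

    // send_sms is a defaction, so rules can use it anywhere
    // a built-in action is allowed
    send_sms = defaction(to, from, message) {
      http:post(base_url + "Messages.json", form = {
        "From": from,
        "To":   to,
        "Body": message
      })
    }
  }
}
```

Because `send_sms` is an action rather than a function, it composes with KRL's rule postlude and action blocks just like the primitives it wraps.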



KRL includes a <code>key</code> pragma in the meta block for declaring keys. The recommended way to use it is to create a module just to hold keys. This has several advantages:
  • The API module (Twilio in this case) can be distributed and used without worrying about key exposure.
  • The API module can be used with different keys depending on who is using it and for what.
  • The keys module can be customized for a given purpose; a given system will likely include keys for multiple modules.
  • The pico engine can manage keys internally so the programmer doesn't have to worry (as much) about key security.
  • The key module can be loaded from a file or password-protected URL to avoid key loss.
The combination of built-in key management and parameterized modules is a powerful abstraction that makes it straightforward to build easy-to-use KRL SDKs for APIs.
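Concretely, the pattern might look like this sketch. The names, event domain, and phone numbers are hypothetical, and it assumes an API module (here called twilio_sdk) that is configured with account_sid and auth_token and provides a send_sms action:

```krl
// Ruleset 1: keys live in their own module,
// kept out of the distributable SDK module
ruleset twilio_keys {
  meta {
    name "Twilio keys (sketch)"
    key twilio {
      "account_sid": "YOUR_ACCOUNT_SID",
      "auth_token" : "YOUR_AUTH_TOKEN"
    }
  }
}

// Ruleset 2: application code wires the keys into the API module
ruleset sms_app {
  meta {
    use module twilio_keys           // makes keys:twilio available
    use module twilio_sdk alias twilio
      with account_sid = keys:twilio{"account_sid"}
       and auth_token  = keys:twilio{"auth_token"}
  }
  rule notify_owner {
    select when temperature threshold_violation
    twilio:send_sms("+15551230000", "+15559870000", "Temperature threshold exceeded")
  }
}
```

Swapping Twilio accounts, or pointing the same SDK at a test account, then only requires installing a different keys ruleset; the application and SDK code are untouched.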

Going Further

The pico lessons have all been recently updated to use the <a href="http://www.windley.com/archives/2017/03/the_new_pico_engine_is_ready_for_use.shtml">new pico engine</a>. If you're interested in learning about reactive programming and the actor model with picos, walk through the <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/19791878/Pico+Engine+Quickstart">Quickstart</a> and then dive into the lessons.

Photo Credit: Blue Marble Geometry from Eric Hartwell (CC BY-NC-SA 3.0)

The New Pico Engine Is Ready for Use

Summary: The new pico engine is nearly feature complete and being used in a wide variety of settings. I'm declaring it ready for use.
A little over a year ago, I announced that I was <a href="http://www.windley.com/archives/2016/03/rebuilding_krl.shtml">starting a project to rebuild the pico engine</a>. My goal was to improve performance, make it easier to install, and support small deployments while retaining the best features of picos, specifically being Internet first.



Over the past year we've met that goal and I'm quite excited about where we're at. Matthew Wright and Bruce Conrad have reimplemented the pico engine in NodeJS. The new engine is easy to install and quite a bit faster than the old engine. We've already got most of the important features of picos. My students have redone large portions of our supporting code to run on the new engine. As a result, the new engine is sufficiently advanced that I'm declaring it ready for use.



We've updated the <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/19791878/Pico+Engine+Quickstart" >Quickstart</a> and <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/1185969/Pico+Programming+Lessons">Pico Programming Lessons</a> to use the new engine. I'm also adding new lessons to help programmers understand the most important features of Picos and KRL.



My Large-Scale Distributed Systems class (CS462) is using the new pico engine for their reactive programming assignments this semester. I've got 40 students going through the pico programming lessons as well as reactive programming labs from the course. The new engine is holding up well. I'm planning to expand its use in the course this spring.



Adam Burdett has redone the <a href="http://www.windley.com/archives/2016/07/pico_labs_at_open_west.shtml" >closet demo we showed at OpenWest</a> last summer using the new engine running on a Raspberry Pi. One of the things I didn't like about using the classic pico engine in this scenario was that it made the solution overly reliant on a cloud-based system (the pico engine) and consequently was not robust under network partitioning. If the goal is to keep my machines cool, I don't want them overheating because my network was down. Now the closet controller can run locally with minimal reliance on the wider Internet.



Bruce was able to use the new engine on a <a href="http://www.windley.com/archives/2017/01/using_picos_for_byus_priority_registration.shtml">proof of concept for BYU's priority registration</a>. This demonstrated the engine's ability to scale and handle thousands of picos. Running on a laptop, the engine handled 44,504 add/drop events across more than 8,000 separate picos in 35 minutes and 19 seconds: a throughput of 21 registration events per second, or 47.6 milliseconds per request.



We've had several lunch and learn sessions with developers inside and outside BYU to introduce the new engine and get feedback. I'm quite pleased with the reception and interactions we've had. I'm looking to expand those now that the lessons are completed and we have had several dozen people work them. If you're interested in attending one, let me know. 

Up Next

I've hired two new students, Joshua Olson and Connor Grimm, to join Adam Burdett and Nick Angell in my lab. We're planning to spend the summer getting Manifold, our pico-based Internet of Things platform, running on the new engine. This will provide a good opportunity to improve the new pico engine and give us a good IoT system for future experiments, supporting our idea around <a href="http://www.windley.com/archives/2015/07/social_things_trustworthy_spaces_and_the_internet_of_things.shtml" >social things</a>.



I'm also contemplating a course on reactive programming with picos on Udemy or something like it. This would be much more focused on practical aspects of reactive programming than my BYU distributed system class. <a href="http://www.windley.com/archives/2015/11/reactive_programming_with_picos.shtml" >Picos are a great way to do reactive programming</a> because they implement an actor model. That's one reason they work so well for the Internet of Things.

Going Further

If you'd like to explore the pico engine and reactive programming with picos, you can start <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/19791878/Pico+Engine+Quickstart"> with the Quickstart</a> and move on to the <a href="https://picolabs.atlassian.net/wiki/spaces/docs/pages/1185969/Pico+Programming+Lessons" >pico programming lessons</a>.



We'd also love help with the open source implementation of the pico engine. The <a href="https://github.com/Picolab/node-pico-engine">code is on GitHub</a> and there's a well-maintained set of <a href="https://github.com/Picolab/pico-engine/issues">issues that need to be worked</a>. Bruce is the coordinator of these efforts.



Any questions on picos or using them can be directed to the <a href="http://forum.picolabs.io/">Pico Labs forum</a> and there's a pretty good set of <a href="https://picolabs.atlassian.net/">documentation</a>.

Photo Credit: The mountains, the lake and the cloud from CameliaTWU (CC BY-NC-ND 2.0)