The project appears to be conceptually aligned with the goals of the Semantic Web, although the specific approach and implementation are almost certainly entirely different.
But what about all the actual knowledge that we as humans have accumulated?
A lot of it is now on the web—in billions of pages of text. And with search engines, we can very efficiently search for specific terms and phrases in that text.
But we can’t compute from that. And in effect, we can only answer questions that have been literally asked before. We can look things up, but we can’t figure anything new out.
So how can we deal with that? Well, some people have thought the way forward must be to somehow automatically understand the natural language that exists on the web. Perhaps getting the web semantically tagged to make that easier.
But armed with Mathematica and NKS I realized there’s another way: explicitly implement methods and models, as algorithms, and explicitly curate all data so that it is immediately computable.
When A New Kind of Science was published in 2002, I spent my $45 and bought a copy.
As I read the book over the subsequent months (BTW – I did read all 846 pages of the core text), a number of its core premises resonated with me, including:
- The universe is ultimately discrete.
- Complex behavior can emerge from simple systems.
- Algorithms can be powerful modeling tools.
- Simple algorithms might explain the origin of the randomness we see in the universe.
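The last two points are easy to see concretely in the book's signature example, the Rule 30 cellular automaton: a one-line update rule that, started from a single black cell, produces a center column of cells with no apparent pattern. Below is a minimal sketch (not code from the book; the grid width, step count, and zero-padded boundary are my own illustrative choices):

```python
def rule30_step(cells):
    """Apply one generation of Rule 30 to a row of 0/1 cells.

    Cells beyond the edges are treated as 0 (white), so the row
    stays a fixed width.
    """
    padded = [0] + cells + [0]
    # Rule 30 in Boolean form: new cell = left XOR (center OR right)
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule30_rows(width, steps):
    """Evolve a single black cell in the middle for `steps` generations."""
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        row = rule30_step(row)
        rows.append(row)
    return rows

if __name__ == "__main__":
    rows = rule30_rows(41, 18)
    for r in rows:
        print("".join("#" if c else "." for c in r))
```

Running it prints the familiar triangular Rule 30 pattern; despite the trivially simple rule, the center column of cells passes many statistical tests of randomness, which is what makes the "randomness from simple algorithms" premise more than a slogan.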
It will be very interesting to see what happens in May when the service is launched.
I signed up for the mailing list and applied for early access, but I'm not expecting that I'll have the privilege of being selected.