A concept first proposed by cryptologist Irving John Good in 1965, the technological singularity refers to the advent of an ‘ultraintelligent machine that can far surpass all the intellectual activities of any man’.
The idea is that such a machine could design even better machines, triggering an ‘intelligence explosion’ and leaving man far behind in intellectual capability. Good proposed that this would be the last invention man need ever make.
Perhaps this is overstating things - it is impossible to know what the future has in store for us - but, as AI development becomes increasingly advanced, only the most phobic would deny feeling that we are on the cusp of making some incredible technological strides.
Data is the life force and fuel of an AI system: it is what the system learns from and references, and the more data that is fed into an AI algorithm, the faster its capabilities grow.
The problem with creating a super-advanced AI is that not only is there currently no infrastructure capable of processing such large datasets, but the vital act of preprocessing such vast quantities of data is itself difficult.
So, for AI neural networks, the preprocessing technique of dimension reduction is essential for large-scale data classification tasks: it can sometimes be tricky, but it serves a useful function in reducing the number of variables likely to confuse an AI.
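To make the idea concrete, here is a minimal sketch of one classic dimension reduction technique, principal component analysis (PCA), written with NumPy. This is a generic illustration of the technique, not anything specific to CyberVein; the data is random and purely for demonstration.

```python
import numpy as np

def pca_reduce(X, k):
    """Project n samples with d features down to k dimensions via PCA."""
    # Center the data so principal directions are computed about the mean.
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions,
    # ordered by how much variance they capture.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Keep the top-k directions and project the data onto them.
    return Xc @ Vt[:k].T

# Toy example: 100 samples in 5 dimensions reduced to 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X2 = pca_reduce(X, 2)
print(X2.shape)  # (100, 2)
```

The first projected column captures the most variance, the second the next most, which is exactly how PCA discards the variables least useful for downstream classification.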
An example of the importance of dimension reduction in a neural net can be found on your smartphone: ask Siri, “What can I do for my cold?”, and it may load a Wikipedia page explaining ‘cold’ in the sense of temperature.
‘Cold’ has several senses as an adjective and several more as a noun, and this is why, for AI, ontologies are a critical component of constructing a broader and better-informed context.
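A tiny sketch shows how an ontology can resolve this ambiguity. The senses of ‘cold’ and their related concepts below are illustrative assumptions, and the overlap scoring is a simplified, Lesk-style heuristic rather than any production disambiguation system.

```python
# Toy ontology: each sense of "cold" maps to related concepts.
# Senses and related terms here are illustrative assumptions.
SENSES = {
    "cold/temperature": {"weather", "ice", "winter", "freezing", "degrees"},
    "cold/illness": {"symptom", "cough", "flu", "medicine", "virus"},
}

def disambiguate(word_senses, query):
    """Pick the sense whose related concepts overlap most with the query."""
    query_words = set(query.lower().replace("?", "").split())
    # Score each sense by how many of its related concepts appear in the query.
    return max(word_senses, key=lambda s: len(word_senses[s] & query_words))

print(disambiguate(SENSES, "What medicine should I take for my cold?"))
# → cold/illness
```

The single context word "medicine" is enough to tip the score toward the illness sense, which is the kind of broader context an ontology gives an AI.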
If we’re moving towards an ultraintelligent AI, then manually creating ontologies simply will not cut it at the scale needed to reach that point.
Ontology learning is the automatic discovery and creation of ontological knowledge through machine learning techniques; this is far more efficient than building ontologies by hand and could also help standardize the language of ontologies.
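One classic ontology-learning technique is extracting is-a relations from raw text with Hearst-style lexical patterns such as “X such as A, B and C”. The sketch below is a toy illustration of that general approach, not CyberVein’s implementation; the corpus sentence and the crude singularization are assumptions for the example.

```python
import re

# Ontology learning sketch: mine is-a (hyponym, hypernym) relations from
# raw text using the Hearst pattern "X such as A, B and C".
PATTERN = re.compile(r"(\w+) such as ((?:\w+(?:, | and )?)+)")

def learn_isa(text):
    relations = set()
    for match in PATTERN.finditer(text.lower()):
        hypernym = match.group(1).removesuffix("s")  # crude singularization
        for hyponym in re.split(r", | and ", match.group(2)):
            relations.add((hyponym.removesuffix("s"), hypernym))
    return relations

corpus = "The clinic treats diseases such as colds, flu and coughs."
print(sorted(learn_isa(corpus)))
# → [('cold', 'disease'), ('cough', 'disease'), ('flu', 'disease')]
```

Run over a large corpus instead of one sentence, a pipeline like this discovers ontological structure automatically, which is exactly why it scales where manual curation cannot.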
Greater use of machine learning makes the entire process automatic and AI-driven, as well as potentially programming-language agnostic, which would massively benefit worldwide collaboration.
CyberVein’s Directed Acyclic Graph (DAG) architecture is well suited to storing neural networks: a neural network is itself a graph structure, so it stores naturally in a graph database, and the same applies to the data it generates.
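The point that a neural network is itself a DAG can be made concrete: the sketch below stores a tiny feed-forward network as an explicit graph of nodes and weighted edges, then evaluates it in topological order using Python’s standard-library `graphlib`. The node names, weights, and ReLU activation are illustrative assumptions, not CyberVein’s actual storage format.

```python
from graphlib import TopologicalSorter

# A tiny neural network expressed explicitly as a DAG:
# each node maps to {predecessor: edge weight}.
edges = {
    "x1": {}, "x2": {},                      # input nodes
    "h1": {"x1": 0.5, "x2": -0.5},           # hidden layer
    "h2": {"x1": 0.3, "x2": 0.8},
    "out": {"h1": 1.0, "h2": 1.0},           # output node
}

def forward(edges, inputs):
    """Evaluate the graph in topological order (predecessors first)."""
    order = TopologicalSorter({n: set(p) for n, p in edges.items()}).static_order()
    values = dict(inputs)
    for node in order:
        if node not in values:
            # Hidden/output nodes: weighted sum of inputs, then ReLU.
            total = sum(w * values[p] for p, w in edges[node].items())
            values[node] = max(0.0, total)
    return values["out"]

result = forward(edges, {"x1": 1.0, "x2": 2.0})
print(result)
```

Because both the structure (edges) and the state (values) are plain graph data, the same representation that runs the network can also be stored and shared in a graph database.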
Ontologies similarly rely on the structured storage of data, such as in multi-agent systems. A simple example would be robots in a fully automated warehouse that need to exchange information to maintain a common overview of the state of the environment.
This is straightforward to achieve in one micro-environment such as a warehouse. But if it is to be extended to a more complex web of interacting environments - for example, an AI model aiding the global provision of health services in developing regions - then the widespread exchange of ontology models and the data they are trained on becomes far more complex.
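The warehouse case above can be sketched in a few lines: robots broadcast observations keyed by terms from a shared ontology, and a last-writer-wins merge keeps everyone’s view of the environment consistent. The class names, the `shelf_A3/occupied` term, and the logical-clock timestamps are all hypothetical, chosen just to illustrate the idea.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    term: str        # shared ontology term, e.g. "shelf_A3/occupied"
    value: object
    timestamp: int   # logical clock; the newest observation wins

class SharedWorldState:
    """Common overview of the environment, merged from all robots."""
    def __init__(self):
        self.state = {}

    def merge(self, obs):
        # Last-writer-wins: keep an observation only if it is newer
        # than what we already hold for that ontology term.
        current = self.state.get(obs.term)
        if current is None or obs.timestamp > current.timestamp:
            self.state[obs.term] = obs

world = SharedWorldState()
world.merge(Observation("shelf_A3/occupied", True, timestamp=1))   # robot 1
world.merge(Observation("shelf_A3/occupied", False, timestamp=3))  # robot 2
world.merge(Observation("shelf_A3/occupied", True, timestamp=2))   # stale update
print(world.state["shelf_A3/occupied"].value)  # False -- newest wins
```

Because every observation is keyed by a shared ontology term, any robot can interpret any other robot’s update; scaling this beyond one warehouse is where distributing the ontology itself becomes the hard part.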
CyberVein can help developers easily distribute ontologies into other relevant environments, where they would in theory become more advanced in less time. Being able to share these ontology models and the data they are trained on would be massively advantageous.
Such vast datasets are difficult enough to process, let alone distribute on a local or international scale, and this is what the CyberVein network offers: a distributed series of databases on a DAG.
Progress beyond this could be explosive: if the development of AIs around the world can begin to see greater collaboration on machine learning advances, it follows that we will move irresistibly closer to Mr. Good’s ultraintelligent machine.
Analytics on ‘really, really big data’ would be made possible by such an AI, while the processing and storage of this unimaginable quantity of data comes built into the CyberVein network: a beautifully simple graph structure of decentralized databases on user device storage.
This is a relatively straightforward concept that holds massive potential. It’s a part of the CyberVein vision: by making it possible to store and exchange data and AI models, machine learning might start to progress exponentially, at a rate that we have never seen before and in a way not even thought possible.
If you’re interested in learning more about the CyberVein network, be sure to join the discussion on Telegram and check us out on Facebook and Twitter. For photo updates please also give us a follow on Instagram!
Thank you for your continued support!
The CyberVein Team