Rob Speer edited this page Jun 12, 2018 · 24 revisions

Here are answers to some frequently-asked questions, updated for ConceptNet 5.5.

The basics

What is ConceptNet?

ConceptNet is a knowledge graph of things people know and computers should know, expressed in various natural languages. See the main page for more details.

Is ConceptNet an AI? Can I talk to it?

ConceptNet is a resource. You can use it as part of making an AI that understands the meanings of words people use.

ConceptNet is not a chatbot. Some chatbot systems have used ConceptNet as a resource, but this is not a primary use case that ConceptNet is designed for.

How can I see what ConceptNet knows?

You can browse the knowledge graph at http://www.conceptnet.io/.

How do I use ConceptNet in my own code?

We recommend starting with the Web API. If you need a greater flow of information than the Web API provides, then consider downloading the data.

One way to take advantage of all the information in ConceptNet, as well as information that can be learned from large corpora of text, is to use the ConceptNet Numberbatch word embeddings. These can be used as a more accurate replacement for word2vec or GloVe vectors.

When used together with some extra code in conceptnet5.vectors, ConceptNet Numberbatch provides the best word embeddings in the world in multiple languages, as tested at SemEval 2017.
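
ConceptNet Numberbatch is distributed as text in the word2vec format: a header line giving the vocabulary size and dimensionality, then one word per line followed by its vector. Here's a minimal, dependency-free sketch of loading such a file and comparing two words -- the sample data is made up for illustration; the real files are much larger and linked from the Numberbatch repository:

```python
import io
import math

def load_vectors(file_obj):
    """Parse word2vec-style text: a "<count> <dims>" header line,
    then one "<word> <floats...>" line per word."""
    count, dims = (int(x) for x in file_obj.readline().split())
    vectors = {}
    for line in file_obj:
        parts = line.rstrip().split(' ')
        vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norms

# Tiny stand-in for a real Numberbatch file:
sample = io.StringIO("2 3\ncat 1.0 0.0 0.0\ndog 0.9 0.1 0.0\n")
vecs = load_vectors(sample)
print(round(cosine(vecs['cat'], vecs['dog']), 3))
```

In practice you would open the downloaded file (it's gzipped) instead of a StringIO, or use a library such as gensim that reads this format directly.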

Using the API

How do I get started using the ConceptNet API?

We went to some effort to make the API responses look nice in a Web browser. The JSON is formatted and highlighted, and values that refer to other API URLs become links, so you can explore just by following them.

Try clicking the link below and you'll be using the ConceptNet API:

http://api.conceptnet.io/c/en/example

Of course you don't have to be a Web browser. If you have curl (a small command-line HTTP utility) on your computer, try running this at the command line:

curl http://api.conceptnet.io/c/en/example

Or in Python, using the requests library:

import requests
obj = requests.get('http://api.conceptnet.io/c/en/example').json()
# obj is now a dictionary; obj['edges'] holds this page's edges
print(len(obj['edges']))

There are more things you can do that won't be quite so obvious just from looking at the responses, so once you've explored a little, go read the API documentation.

The API returns fewer results than I saw on the Web interface. Where are the rest of the results?

There are more pages of results. The default page size is 20 -- this keeps responses fast, and the pagination section of the response makes it clear when there are more results to fetch.

When the API results are paginated, the response will end with a section that looks like this:

  "view": {
    "@id": "/c/en/example?offset=0&limit=20",
    "@type": "PartialCollectionView",
    "comment": "There are more results. Follow the 'nextPage' link for more.",
    "firstPage": "/c/en/example?offset=0&limit=20",
    "nextPage": "/c/en/example?offset=20&limit=20",
    "paginatedProperty": "edges"
  }

As the comment states, "nextPage" contains a link to the next page of results. If you're viewing the API response in a Web browser, you can click the link to see more results.
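
To consume all the pages programmatically, you can loop over the "nextPage" links. Here is a hedged sketch, where `fetch` stands for whatever function retrieves a path from api.conceptnet.io and decodes the JSON -- for example, `lambda path: requests.get('http://api.conceptnet.io' + path).json()`:

```python
def iterate_edges(fetch, path, limit=20):
    """Yield every edge for an API path, following the 'nextPage'
    links in the 'view' section until there are no more pages."""
    page = fetch('%s?offset=0&limit=%d' % (path, limit))
    while True:
        for edge in page.get('edges', []):
            yield edge
        next_page = page.get('view', {}).get('nextPage')
        if next_page is None:
            break
        page = fetch(next_page)
```

If you page through many results against the public server, it's polite to pause briefly between requests.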

I queried the API and got a bunch of HTML formatting. How do I just get the JSON?

The API tries to send formatted HTML only when the request looks like it came from a Web browser, but that detection can be wrong, or you may simply want the plain JSON anyway. Add ?format=json to the URL that you query. For example:

http://api.conceptnet.io/c/en/example?format=json

Try going to that URL in Firefox, which has its own built-in JSON formatter. It won't give you a way to follow the links, but other than that, it's pretty nice.

What format are these API responses in?

JSON-LD, a linked data format that on the surface is just reasonable-looking JSON, and under the hood, preserves some of the good parts of RDF and the Semantic Web.

Comparisons to other projects

How does ConceptNet compare to WordNet?

This is an interesting comparison to make, as the projects have similar goals, and by now they both make use of multilingual linked data.

ConceptNet contains more kinds of relationships than WordNet. ConceptNet's vocabulary is larger and interconnected in many more ways. In exchange, it's somewhat messier than WordNet.

ConceptNet does only the bare minimum to distinguish word senses so far -- in the built graph of ConceptNet 5.5, word senses are only distinguished by their part of speech (similar to sense2vec). WordNet has a large number of senses for every word, though some of them are difficult to distinguish in practice.

WordNet is too sparse for some applications. You can't build word vectors from WordNet alone. You can't compare nouns to verbs in WordNet, because they are mostly unconnected vocabularies.

ConceptNet does not assume that words fall into "synsets", sets of synonyms that are completely interchangeable. Synonymy in ConceptNet is a relation like any other. If you've worked with WordNet, you may have been frustrated by the implications of the synset assumption on real text, where words are not marked with specific senses, and where the word "He" cannot usually be replaced synonymously with "atomic number 2".

In ConceptNet, we incorporate as much of WordNet as we can while undoing the synset assumption, and we give it a high weight, because the information in WordNet is valuable and usually quite accurate.

How does ConceptNet compare to the Google Knowledge Graph?

ConceptNet is linked open data, and that makes it fundamentally a different thing than a proprietary knowledge base.

Google's Knowledge Graph is a brand name on top of the structured knowledge that it takes to run the Google search engine, Google Assistant, and probably other applications. It provides those sidebars of facts you get when you search for things on Google, and it provides answers to questions that you ask the Google Assistant. It seems to focus largely on things you can buy and things you can look up on Wikipedia. (In ConceptNet, we focus more on the general meanings of all words, whether they be nouns, verbs, adjectives, or adverbs, and less on named entities.)

I assume it's a very well-designed knowledge representation for a search engine. And there is only one search engine that it can power. Fundamentally, the Google Knowledge Graph supports the ability to interact with Google products on Google's terms.

Unlike the typical corporate knowledge base, ConceptNet has remained true to its crowdsourcing roots. While it's a project developed at Luminoso, it is open for anyone to use under a Creative Commons license. This is the fair thing to do, given how much of it depends on public contributions and linked data, but it's also part of Luminoso's ideals. When we let you see and use our state-of-the-art knowledge representation first-hand, it promotes understanding of why Luminoso's products are a better approach to NLP.

How does ConceptNet compare to BabelNet?

BabelNet is very similar in structure to ConceptNet, but very different in openness.

BabelNet uses many of the same knowledge sources as ConceptNet. It lacks the Open Mind Common Sense and Games with a Purpose data, which provide ConceptNet with a wide range of noisy but effective relational knowledge. It does, on the other hand, have a representation of WordNet-style word senses that ConceptNet doesn't have.

As of 2018, BabelNet is proprietary and not available to the public. You may find this surprising given how they've touted their openness in the past, and given that it's built on Creative Commons Share-Alike resources, but check their site. You won't find a download link.

They allow you to submit an application to use it for research purposes only, if you meet the requirements of having academic credentials and a current academic affiliation.

How does ConceptNet compare to DBPedia?

DBPedia is very much focused on named entities. It's messier than ConceptNet. Its vocabulary consists only of titles of Wikipedia articles.

DBPedia contains information that can be used for answering specific questions, such as "Where is the birthplace of John Adams?" or "What countries have a population of over 10 million?". It particularly knows a lot about locations, movies, and music albums. You could use DBPedia to solve Six Degrees of Kevin Bacon.

ConceptNet imports a small amount of DBPedia, and also contains external links to DBPedia and Wikidata.

How does ConceptNet compare to DBnary?

DBnary is a counterpart to DBPedia that's actually quite compatible with ConceptNet. Like ConceptNet, it focuses on word definitions rather than named entities, and it gets them from parsing Wiktionary.

Right now we use our own Wiktionary parser, which covers fewer Wiktionary sites than DBnary does but extracts more detail from each entry. We would gladly use DBnary instead, if DBnary starts extracting information such as links from definitions.

How does ConceptNet compare to (Open)Cyc?

Cyc was an ontology built on a predicate logic representation called CycL. CycL enabled very precise reasoning in a way that machine learning over ConceptNet doesn't. However, Cyc was intolerant of errors, and adding information to Cyc was a difficult task that kept Cycorp occupied for over 30 years.

OpenCyc provides a hierarchy of types of things, with English names, some of which are automatically generated. It seems to be intended as a preview of the full Cyc system, a proprietary system that was shut down in 2017.

ConceptNet includes a subset of OpenCyc, consisting of the IsA statements that can be reasonably represented in natural language.

How does ConceptNet compare to the Microsoft Concept Graph?

The Microsoft Concept Graph is a proprietary taxonomy of English nouns, connected with the "IsA" relation, with some automatic word sense disambiguation. Its data comes from machine reading of a Web search index. It resembles an automatically-generated version of OpenCyc, and is derived from an earlier project named Probase.

The Microsoft Concept Graph was shut down in 2018.

Knowledge representation

How many statements (edges) are there in ConceptNet?

Approximately 28 million.

Does ConceptNet use logical predicates?

No. Its representation is words and phrases of natural language, and relations between them. Natural language can be vague, illogical, and incredibly useful.
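
Concretely, a term in ConceptNet is identified by a URI built from its language and its text, optionally followed by a part of speech. The sketch below shows the general shape of these URIs; it's an approximation for illustration, not the exact normalization performed by ConceptNet's conceptnet5.uri module, which handles punctuation and phrasing more carefully:

```python
def concept_uri(lang, text, pos=None):
    """Sketch of ConceptNet-style term URIs: lowercase the text,
    replace spaces with underscores, and optionally append a part
    of speech (n, v, a, r) as an extra path component."""
    term = text.lower().replace(' ', '_')
    uri = '/c/%s/%s' % (lang, term)
    if pos:
        uri += '/' + pos
    return uri

print(concept_uri('en', 'example'))          # /c/en/example
print(concept_uri('en', 'jet engine', 'n'))  # /c/en/jet_engine/n
```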

How many languages is ConceptNet in?

The data that ConceptNet is built from spans a lot of different languages, with a long tail of marginally-represented languages. 10 languages have core support, 77 languages have moderate support, and 304 languages are supported in total. See Languages for a complete list.

ConceptNet is missing facts.

This will always be true. We use machine-learning techniques, including word embeddings, to learn generalizable things from ConceptNet despite the incompleteness of the knowledge it contains.

ConceptNet contains false information.

There will probably always be isolated mistakes or falsehoods in ConceptNet. Our data sources and our processes are not perfect. Machine learning can be relatively robust against errors, as long as the errors are not systematic.

If you've identified a systematic source of errors in ConceptNet, that is more important. It would probably improve ConceptNet to get rid of it. In that case, please go to the 'Issues' tab and describe it in an issue report.

What are the relations represented in ConceptNet? What do they mean?

See the table on the Relations page of this wiki.

Where do the edge weights in ConceptNet come from?

Made-up numbers that are programmed into the reader modules that import various sources of knowledge. These weights represent a rough heuristic of which statements you should trust more than other statements.

Can I add new information to ConceptNet?

During the golden age of crowdsourcing (the decade of the 2000s), ConceptNet accepted direct contributions of knowledge. This was a great start, but now the opportunities for improving ConceptNet have changed, and we are content to leave crowdsourcing to the organizations that are really good at it, like the Wikimedia Foundation.

If you contribute to Wiktionary and follow their guidelines, the information you contribute will eventually be represented in ConceptNet.

What I mean is, can I make my own version of ConceptNet that includes information that I need in my domain?

Well, you can reproduce ConceptNet's build process and change the code to import a new source of data. This may or may not accomplish what you want.

What ConceptNet is designed for is representing general knowledge. Making a useful domain-specific semantic model is a rather different process, in our experience. The software we built on top of ConceptNet to make this possible eventually became our company, Luminoso. Luminoso provides software as a service that creates domain-specific semantic models, which make use of ConceptNet so they can start out knowing what words mean and just have to learn what's different in your domain.

Technologies

What kind of database does ConceptNet use?

We've tried a lot of them. Currently PostgreSQL.

Why not a graph database? Why not [insert new database name here]?

Probably one of the following reasons:

  • It isn't as efficient as PostgreSQL
  • It doesn't actually work as advertised
  • It is no longer maintained
  • It doesn't provide a good workflow for importing a medium-sized graph such as ConceptNet
  • It takes more than a day to import a medium-sized graph such as ConceptNet
  • It inflates the size of the data it stores by a factor of more than 10
  • It assumes every user has access to and wants to use a distributed computing cluster
  • It would be hard for people who want their own copy of ConceptNet to install it
  • It's not free software
  • It has a restriction on it that would prevent people from reusing ConceptNet, such as the GPL or "academic use only"

If you think you know of a database that doesn't fail any of these criteria, I'd still be interested to hear about it.

Is ConceptNet "big data"?

It fits on a hard disk, so no. It's enough data for many purposes. But text is small.

If you have textual knowledge that actually requires distributed computation, you work at a company that does Web search.

Is there a graph visualization of ConceptNet?

You're asking about a visualization like this, right?

Notice that that graph is a few thousand times smaller than ConceptNet, and it's already an incomprehensible rainbow-colored hairball. I'm not convinced that any existing technology can put all of ConceptNet in one meaningful image, although there may be an approach that involves spreading it out into local clusters using t-SNE.

It will almost certainly involve custom code -- ConceptNet makes off-the-shelf graph visualizers collapse under the insoluble problem of laying out its edges. I'm interested in making such a visualization, but the result has to be informative, not just a hairball.

Can ConceptNet be queried using SPARQL?

No. SPARQL is computationally infeasible. Similar projects that use SPARQL have unacceptable latency and go down whenever anyone starts using them in earnest.

The way to query ConceptNet is using a rather straightforward REST API, described on the API page. If you need to make a form of query that this API doesn't support, open an issue and we'll look into supporting it.
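
For example, the API's /query endpoint accepts parameters such as start, end, rel, node, and other to filter edges. A minimal sketch of building such a query URL in Python (see the API documentation for the authoritative parameter list):

```python
from urllib.parse import urlencode

BASE = 'http://api.conceptnet.io'

def query_url(**params):
    """Build a URL for the /query endpoint, e.g. all edges whose
    start node is /c/en/dog and whose relation is /r/IsA."""
    return BASE + '/query?' + urlencode(sorted(params.items()), safe='/')

print(query_url(start='/c/en/dog', rel='/r/IsA'))
# http://api.conceptnet.io/query?rel=/r/IsA&start=/c/en/dog
```

You can then fetch the resulting URL with requests and read the 'edges' list from the JSON response, paging through results the same way as any other API response.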

AI hype

I heard that ConceptNet has the intelligence of a 4-year-old, is this true?

Blame science reporting for doing what it usually does. There's a nugget of truth in there surrounded by a big wad of meaningless AI hype. It's true that ConceptNet 4 could compete with 4-year-olds on a particular question-answering task -- and ConceptNet 5 performs much better on a similar task. This is cool. It doesn't mean that anyone's about to make robot children.

Here's the background: A much older version of ConceptNet, ConceptNet 4, was evaluated on some intelligence tests involving question-answering and sentence comprehension. The researchers who performed these tests compared ConceptNet's performance to a 4-year-old child.

We found the comparison odd but flattering. 4-year-old children are incredible beings. They have desires, goals, and imagination, and they can communicate them in their spoken language with a level of competence that second-language learners have to put tremendous effort into achieving. No real AI system can come close to emulating the range of things a child can do.

When it comes to the narrower task of answering questions, though, it's believable that ConceptNet 4 compared to a 4-year-old. We're always interested in measurably improving the general intelligence contained in ConceptNet. Excitingly, we now have a question-answering task in which ConceptNet 5 compares to a 17-year-old: that of answering SAT-style analogy questions.

But there is much more to be done. The Story Cloze Test is a test of story understanding that any human can score close to 100% on in their native language. Natural language AI systems, including ConceptNet, have not yet surpassed 75% on this test.
