From 3c57f71b66b45a0a35e6265a55daab5c2cdddf17 Mon Sep 17 00:00:00 2001 From: Pierre-Antoine Champin Date: Wed, 14 Aug 2024 12:39:27 +0200 Subject: [PATCH 1/2] typos intro --- specs/solid-indexing/index.html | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/specs/solid-indexing/index.html b/specs/solid-indexing/index.html index 1958f6c..785c7f8 100644 --- a/specs/solid-indexing/index.html +++ b/specs/solid-indexing/index.html @@ -33,11 +33,11 @@

Introduction

The Web is made of documents like web pages, API data, images and so on that are linked together. They are - information that anyone with the proper authorization can access to. Documents can be found everywhere + information that anyone with the proper authorization can access. Documents can be found everywhere on the web and we often have to follow the links between them in order to find relevant information. The web is therefore browsable but browsing it manually can be very long to find what we are searching for. That's why search engines have been invented. They browse the web for us by following the - links they find in documents so they can tell us where to get documents about a particular subject. + links they find in documents, so they can tell us where to get documents about a particular subject.

Search engines are doing indexing. Indexing is a largely used mechanism that allow to find data faster thanks to @@ -51,8 +51,8 @@

Introduction

traditionnal Web except that information is described to machines so they can "understand" what documents and the things they contain are about. This way we can directly get the information we need without having to look at the content of documents. This is made possible thanks to the Resource Description Framework (RDF). On this improved Web, - knowledge can be deducted automatically by machines. For instance, if a document is about a person and the machines - know a person is a human, they can deduce that this person is a human. Search engines can benefit a lot from the web + knowledge can be deduced automatically by machines. For instance, if a document is about a person and the machines + know that a person is a human, they can deduce that this person is a human. Search engines can benefit a lot from the web of data and some engines already take advantage of it to better respond to our queries.

@@ -305,4 +305,4 @@

Relation to Solid Type indexes

- \ No newline at end of file + From 6c1a9b8c3806a5bfbc244622b8a2356cab87bb3f Mon Sep 17 00:00:00 2001 From: Pierre-Antoine Champin Date: Wed, 14 Aug 2024 14:53:15 +0200 Subject: [PATCH 2/2] typos section 2 --- specs/solid-indexing/index.html | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/specs/solid-indexing/index.html b/specs/solid-indexing/index.html index 785c7f8..8365b15 100644 --- a/specs/solid-indexing/index.html +++ b/specs/solid-indexing/index.html @@ -121,7 +121,7 @@

Namespaces

Indexes

-

This document proposes the Indexing ontology as vocabulary for the indexes. +

This document proposes the Indexing ontology as a vocabulary for describing indexes. This ontology is using [[SHACL]] shapes to express what is indexed.

@@ -129,7 +129,7 @@

Indexes

General indexes

An index is a RDF document [[RDF11-CONCEPTS]] of type idx:Index containing entries which point to - instances conforming to a particular shape. The shape the index or an entry is targeting is expressed with the + instances conforming to a particular shape. The shape that the index or an entry is targeting is expressed with the idx:hasShape predicate.

@@ -152,22 +152,21 @@

General indexes

-

Meta indexes

+

Meta-indexes

-

Meta indexes are indexes that are indexing other indexes. They can be used to divide a entire index into - smaller parts. While more queries are needed to load the data of interest, meta indexes might reduce the +

Meta-indexes are indexes that index other indexes. They can be used to divide an entire index into + smaller parts. While more queries are needed to load the data of interest, meta-indexes might reduce the size of the transfered data by targeting parts with precision. It can also give faster results especially - when combined with an heuristic like detailed in the source selection section.

+ when combined with a heuristic like the one detailed in .

-
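
The meta-index idea in the second patch above (split one index into smaller sub-indexes, paying extra queries but transferring only the parts of interest) can be sketched in plain Python. Everything below is an illustrative assumption — the `WEB` mapping, the document names, and the `meta`/`entries` fields stand in for fetched RDF index documents and are not part of the Indexing ontology itself.

```python
# Toy store of fetched documents: IRI -> either a meta-index, whose
# entries point to other indexes, or a leaf index, whose entries
# point to data documents. All names here are invented for the sketch.
WEB = {
    "ex:rootIndex": {"meta": True, "entries": ["ex:indexAM", "ex:indexNZ"]},
    "ex:indexAM": {"meta": False, "entries": ["ex:alice", "ex:bob"]},
    "ex:indexNZ": {"meta": False, "entries": ["ex:zoe"]},
}

def resolve(index_iri, fetches=None):
    """Follow meta-index entries down to data documents, recording
    every index document that had to be fetched along the way."""
    if fetches is None:
        fetches = []
    fetches.append(index_iri)  # one extra query per index document
    doc = WEB[index_iri]
    results = []
    for entry in doc["entries"]:
        if doc["meta"]:
            # Entry is itself an index: recurse into it.
            results.extend(resolve(entry, fetches))
        else:
            # Entry points directly to a data document.
            results.append(entry)
    return results

print(resolve("ex:rootIndex"))  # ['ex:alice', 'ex:bob', 'ex:zoe']
```

This makes the trade-off in the patched paragraph concrete: reaching the data costs three fetches instead of one, but a client interested only in the N–Z part could fetch `ex:rootIndex` and `ex:indexNZ` while skipping `ex:indexAM` entirely — the "targeting parts with precision" that lets meta-indexes reduce the transferred data.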