From a439ebf8c8b858ef7104b08e33c3b228ef1cdbf9 Mon Sep 17 00:00:00 2001
From: Eric Pugh
Date: Mon, 22 Jul 2024 12:24:44 -0400
Subject: [PATCH] Eliminate linux/window tabs for bin/solr post. (#2579)

Also use the same style of formatting for the commands.
---
 .../deployment-guide/pages/enabling-ssl.adoc  |  6 +--
 .../getting-started/pages/tutorial-films.adoc | 43 ---------------
 .../pages/tutorial-paramsets.adoc             | 16 ------
 .../pages/tutorial-techproducts.adoc          |  7 ---
 .../pages/tutorial-vectors.adoc               | 17 ------
 .../pages/indexing-with-tika.adoc             | 24 ++++-----
 .../indexing-guide/pages/post-tool.adoc       | 52 +++++++++----------
 .../query-guide/pages/spatial-search.adoc     |  6 ++-
 .../query-guide/pages/tagger-handler.adoc     | 12 ++---
 9 files changed, 51 insertions(+), 132 deletions(-)

diff --git a/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc b/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
index cae6b1cd1b6..64eac73e73d 100644
--- a/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
+++ b/solr/solr-ref-guide/modules/deployment-guide/pages/enabling-ssl.adoc
@@ -435,7 +435,7 @@ You should get a response that looks like this:
 
 Use `bin/solr post` to index some example documents to the SolrCloud collection created above:
 
-[source,bash]
+[source,console]
 ----
 $ bin/solr post --solr-update-url https://localhost:8984/solr/mycollection/update example/exampledocs/*.xml
 ----
@@ -445,9 +445,9 @@ $ bin/solr post --solr-update-url https://localhost:8984/solr/mycollection/updat
 Use curl to query the SolrCloud collection created above, from a directory containing the PEM formatted certificate and key created above (e.g., `example/etc/`).
 If you have not enabled client authentication (system property `-Djetty.ssl.clientAuth=true)`, then you can remove the `-E solr-ssl.pem:secret` option:
 
-[source,bash]
+[source,console]
 ----
-curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/mycollection/select?q=*:*"
+$ curl -E solr-ssl.pem:secret --cacert solr-ssl.pem "https://localhost:8984/solr/mycollection/select?q=*:*"
 ----
 
 === Index a Document using CloudSolrClient

diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
index 269610912a8..8191682735c 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-films.adoc
@@ -222,72 +222,29 @@ Pick one of the formats and index it into the "films" collection (in each exampl
 .To Index JSON Format
 [tabs#index-json]
 ======
-Linux/Mac::
-+
-====
 [,console]
 ----
 $ bin/solr post -c films example/films/films.json
-
-----
-====
-
-Windows::
-+
-====
-[,console]
-----
-$ bin/solr post -c films example\films\films.json
 ----
-====
 ======
 
 .To Index XML Format
 [tabs#index-xml]
 ======
-Linux/Mac::
-+
-====
 [,console]
 ----
 $ bin/solr post -c films example/films/films.xml
-
-----
-====
-
-Windows::
-+
-====
-[,console]
-----
-$ bin/solr post -c films example\films\films.xml
 ----
-====
 ======
 
 .To Index CSV Format
 [tabs#index-csv]
 ======
-Linux/Mac::
-+
-====
 [,console]
 ----
 $ bin/solr post -c films example/films/films.csv -params "f.genre.split=true&f.directed_by.split=true&f.genre.separator=|&f.directed_by.separator=|"
-
-----
-====
-
-Windows::
-+
-====
-[,console]
-----
-$ bin/solr post -c films example\films\films.csv -params "f.genre.split=true&f.directed_by.split=true&f.genre.separator=|&f.directed_by.separator=|"
 ----
-====
 ======
 
 Each command includes these main parameters:

diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-paramsets.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-paramsets.adoc
index a0a58bfe450..1b769e2322c 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-paramsets.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-paramsets.adoc
@@ -69,26 +69,10 @@ and for the release date also to be single valued.
 Now that we have updated our Schema, we need to index the sample film data, or, if you already have indexed it, then re-index it to take advantage of the new field definitions we added.
 
-[tabs#index-json]
-======
-Linux/Mac::
-+
-====
 [,console]
 ----
 $ bin/solr post -c films example/films/films.json
 ----
-====
-
-Windows::
-+
-====
-[,console]
-----
-$ bin/solr post -c films example\films\films.json
-----
-====
-======
 
 === Let's get Searching!

diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc
index e00e5b19991..77b4aaa4ca0 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-techproducts.adoc
@@ -167,18 +167,11 @@ You'll need a command shell to run some of the following examples, rooted in the
 The data we will index is in the `example/exampledocs` directory.
 The documents are in a mix of document formats (JSON, CSV, etc.), and fortunately we can index them all at once:
 
-.Linux/Mac
 [,console]
 ----
 $ bin/solr post -c techproducts example/exampledocs/*
 ----
 
-.Windows
-[,console]
-----
-$ bin/solr post -c techproducts example\exampledocs\*
-----
-
 You should see output similar to the following:
 
 [,console]

diff --git a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-vectors.adoc b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-vectors.adoc
index 90c84dc5345..133927b74c1 100644
--- a/solr/solr-ref-guide/modules/getting-started/pages/tutorial-vectors.adoc
+++ b/solr/solr-ref-guide/modules/getting-started/pages/tutorial-vectors.adoc
@@ -75,27 +75,10 @@ $ curl http://localhost:8983/solr/films/schema -X POST -H 'Content-type:applicat
 We have the vectors embedded in our `films.json` file, so let's index that data, taking advantage of our new schema field we just defined.
 
-[tabs#index-json]
-======
-Linux/Mac::
-+
-====
 [,console]
 ----
 $ bin/solr post -c films example/films/films.json
-
 ----
-====
-
-Windows::
-+
-====
-[,console]
-----
-$ bin/solr post -c films example\films\films.json
-----
-====
-======
 
 === Let's do some Vector searches
 
 Before making the queries, we define an example target vector, simulating a person that

diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc
index d32fe2b7b75..8800d85c018 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/indexing-with-tika.adoc
@@ -126,9 +126,9 @@ Note this includes the path, so if you upload a different file, always be sure t
 You can also use `bin/solr post` to do the same thing:
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c gettingstarted example/exampledocs/solr-word.pdf -params "literal.id=doc1"
+$ bin/solr post -c gettingstarted example/exampledocs/solr-word.pdf -params "literal.id=doc1"
 ----
 
 Now you can execute a query and find that document with a request like `\http://localhost:8983/solr/gettingstarted/select?q=pdf`.
@@ -146,9 +146,9 @@ The dynamic field `ignored_*` is good for this purpose.
 For the fields you do want to map, explicitly set them using `fmap.IN=OUT` and/or ensure the field is defined in the schema.
 Here's an example:
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c gettingstarted example/exampledocs/solr-word.pdf -params "literal.id=doc1&uprefix=ignored_&fmap.last_modified=last_modified_dt"
+$ bin/solr post -c gettingstarted example/exampledocs/solr-word.pdf -params "literal.id=doc1&uprefix=ignored_&fmap.last_modified=last_modified_dt"
 ----
 
 [NOTE]
@@ -561,18 +561,18 @@ If `literalsOverride=false`, literals will be appended as multi-value to the Tik
 
 The command below captures `<div>` tags separately (`capture=div`), and then maps all the instances of that field to a dynamic field named `foo_t` (`fmap.div=foo_t`).
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c gettingstarted example/exampledocs/sample.html -params "literal.id=doc2&captureAttr=true&defaultField=_text_&fmap.div=foo_t&capture=div"
+$ bin/solr post -c gettingstarted example/exampledocs/sample.html -params "literal.id=doc2&captureAttr=true&defaultField=_text_&fmap.div=foo_t&capture=div"
 ----
 
 === Using Literals to Define Custom Metadata
 
 To add in your own metadata, pass in the literal parameter along with the file:
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c gettingstarted -params "literal.id=doc4&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&literal.blah_s=Bah" example/exampledocs/sample.html
+$ bin/solr post -c gettingstarted -params "literal.id=doc4&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&literal.blah_s=Bah" example/exampledocs/sample.html
 ----
 
 The parameter `literal.blah_s=Bah` will insert a field `blah_s` into every document.
@@ -582,9 +582,9 @@ Every instance of the text will be "Bah".
 The example below passes in an XPath expression to restrict the XHTML returned by Tika:
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c gettingstarted -params "literal.id=doc5&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&xpath=/xhtml:html/xhtml:body/xhtml:div//node()" example/exampledocs/sample.html
+$ bin/solr post -c gettingstarted -params "literal.id=doc5&captureAttr=true&defaultField=text&capture=div&fmap.div=foo_t&xpath=/xhtml:html/xhtml:body/xhtml:div//node()" example/exampledocs/sample.html
 ----
 
 === Extracting Data without Indexing
@@ -601,9 +601,9 @@ curl "http://localhost:8983/solr/gettingstarted/update/extract?&extractOnly=true
 The output includes XML generated by Tika (and further escaped by Solr's XML) using a different output format to make it more readable (`-out yes` instructs the tool to echo Solr's output to the console):
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c gettingstarted -params "extractOnly=true&wt=ruby&indent=true" -out yes example/exampledocs/sample.html
+$ bin/solr post -c gettingstarted -params "extractOnly=true&wt=ruby&indent=true" -out yes example/exampledocs/sample.html
 ----
 
 === Using Solr Cell with a POST Request

diff --git a/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc b/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc
index 5e250e065f1..b6cd39a86fa 100644
--- a/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc
+++ b/solr/solr-ref-guide/modules/indexing-guide/pages/post-tool.adoc
@@ -110,48 +110,48 @@ This section presents several examples.
 Index all JSON files into `gettingstarted`.
 
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8983/solr/gettingstarted/update *.json
+$ bin/solr post -url http://localhost:8983/solr/gettingstarted/update *.json
 ----
 
 === Indexing XML
 
 Add all documents with file extension `.xml` to the collection named `gettingstarted`.
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8983/solr/gettingstarted/update *.xml
+$ bin/solr post -url http://localhost:8983/solr/gettingstarted/update *.xml
 ----
 
 Add all documents starting with `article` with file extension `.xml` to the `gettingstarted` collection on Solr running on port `8984`.
 
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8984/solr/gettingstarted/update article*.xml
+$ bin/solr post -url http://localhost:8984/solr/gettingstarted/update article*.xml
 ----
 
 Send XML arguments to delete a document from `gettingstarted`.
 
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8983/solr/gettingstarted/update -mode args -type application/xml '<delete><id>42</id></delete>'
+$ bin/solr post -url http://localhost:8983/solr/gettingstarted/update -mode args -type application/xml '<delete><id>42</id></delete>'
 ----
 
 === Indexing CSV and JSON
 
 Index all CSV and JSON files into `gettingstarted` from current directory:
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c gettingstarted -filetypes json,csv .
+$ bin/solr post -c gettingstarted -filetypes json,csv .
 ----
 
 Index a tab-separated file into `gettingstarted`:
 
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8984/solr/signals/update -params "separator=%09" -type text/csv data.tsv
+$ bin/solr post -url http://localhost:8984/solr/signals/update -params "separator=%09" -type text/csv data.tsv
 ----
 
 The content type (`-type`) parameter is required to treat the file as the proper type, otherwise it will be ignored and a WARNING logged as it does not know what type of content a .tsv file is.
@@ -161,32 +161,32 @@ The xref:indexing-with-update-handlers.adoc#csv-formatted-index-updates[CSV hand
 Index a PDF file into `gettingstarted`.
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8983/solr/gettingstarted/update a.pdf
+$ bin/solr post -url http://localhost:8983/solr/gettingstarted/update a.pdf
 ----
 
 Automatically detect content types in a folder, and recursively scan it for documents for indexing into `gettingstarted`.
 
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8983/solr/gettingstarted/update afolder/
+$ bin/solr post -url http://localhost:8983/solr/gettingstarted/update afolder/
 ----
 
 Automatically detect content types in a folder, but limit it to PPT and HTML files and index into `gettingstarted`.
 
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8983/solr/gettingstarted/update -filetypes ppt,html afolder/
+$ bin/solr post -url http://localhost:8983/solr/gettingstarted/update -filetypes ppt,html afolder/
 ----
 
 === Indexing to a Password Protected Solr (Basic Auth)
 
 Index a PDF as the user "solr" with password "SolrRocks":
 
-[source,bash]
+[,console]
 ----
-bin/solr post -u solr:SolrRocks -url http://localhost:8983/solr/gettingstarted/update a.pdf
+$ bin/solr post -u solr:SolrRocks -url http://localhost:8983/solr/gettingstarted/update a.pdf
 ----
 
 === Crawling a Website to Index Documents
 
@@ -195,9 +195,9 @@ Crawl the Apache Solr website going one layer deep and indexing the pages into S
 See xref:indexing-with-tika.adoc#trying-out-solr-cell[Trying Out Solr Cell] to learn more about setting up Solr for extracting content from web pages.
 
-[source,bash]
+[,console]
 ----
-bin/solr post -mode web -c gettingstarted -recursive 1 -delay 1 https://solr.apache.org/
+$ bin/solr post -mode web -c gettingstarted -recursive 1 -delay 1 https://solr.apache.org/
 ----
 
 === Standard Input as Source for Indexing
 
@@ -205,16 +205,16 @@ bin/solr post -mode web -c gettingstarted -recursive 1 -delay 1 https://solr.apa
 You can use the standard input as your source for data to index.
 Notice the `-out` providing raw responses from Solr.
-[source,bash]
+[,console]
 ----
-echo '{commit: {}}' | bin/solr post -mode stdin -url http://localhost:8983/my_collection/update -out
+$ echo '{commit: {}}' | bin/solr post -mode stdin -url http://localhost:8983/my_collection/update -out
 ----
 
 === Raw Data as Source for Indexing
 
 Provide the raw document as a string for indexing.
 
-[source,bash]
+[,console]
 ----
-bin/solr post -url http://localhost:8983/signals/update -mode args -type text/csv -out $'id,value\n1,0.47'
+$ bin/solr post -url http://localhost:8983/signals/update -mode args -type text/csv -out $'id,value\n1,0.47'
 ----

diff --git a/solr/solr-ref-guide/modules/query-guide/pages/spatial-search.adoc b/solr/solr-ref-guide/modules/query-guide/pages/spatial-search.adoc
index c7a8bda74e9..c8fdc34827a 100644
--- a/solr/solr-ref-guide/modules/query-guide/pages/spatial-search.adoc
+++ b/solr/solr-ref-guide/modules/query-guide/pages/spatial-search.adoc
@@ -70,8 +70,10 @@ However, it's much bulkier than the raw coordinates for such simple data.
 
 Using the `bin/solr post` tool:
 
-[source,text]
-bin/solr post -type "application/json" -url "http://localhost:8983/solr/mycollection/update?format=geojson" /path/to/geojson.file
+[,console]
+----
+$ bin/solr post -type "application/json" -url "http://localhost:8983/solr/mycollection/update?format=geojson" /path/to/geojson.file
+----
 
 The key parameter to pass in with your request is:

diff --git a/solr/solr-ref-guide/modules/query-guide/pages/tagger-handler.adoc b/solr/solr-ref-guide/modules/query-guide/pages/tagger-handler.adoc
index 31ef089f593..7ec9849d24c 100644
--- a/solr/solr-ref-guide/modules/query-guide/pages/tagger-handler.adoc
+++ b/solr/solr-ref-guide/modules/query-guide/pages/tagger-handler.adoc
@@ -275,18 +275,18 @@ should be almost 7MB file expanding to a cities1000.txt file around
 population.
 Using bin/solr post:
 
-[source,bash]
+[,console]
 ----
-bin/solr post -c geonames -type text/csv \
+$ bin/solr post -c geonames -type text/csv \
   -params 'optimize=true&maxSegments=1&separator=%09&encapsulator=%00&fieldnames=id,name,,alternative_names,latitude,longitude,,,countrycode,,,,,,population,elevation,,timezone,lastupdate' \
   /tmp/cities1000.txt
 ----
 
 or using curl:
 
-[source,bash]
+[,console]
 ----
-curl -X POST --data-binary @/path/to/cities1000.txt -H 'Content-type:application/csv' \
+$ curl -X POST --data-binary @/path/to/cities1000.txt -H 'Content-type:application/csv' \
   'http://localhost:8983/solr/geonames/update?commit=true&optimize=true&maxSegments=1&separator=%09&encapsulator=%00&fieldnames=id,name,,alternative_names,latitude,longitude,,,countrycode,,,,,,population,elevation,,timezone,lastupdate'
 ----
 
@@ -306,9 +306,9 @@ This is a trivial example tagging a small piece of text.
 For more options, see the earlier documentation.
 
-[source,bash]
+[,console]
 ----
-curl -X POST \
+$ curl -X POST \
   'http://localhost:8983/solr/geonames/tag?overlaps=NO_SUB&tagsLimit=5000&fl=id,name,countrycode&wt=json&indent=on' \
   -H 'Content-Type:text/plain' -d 'Hello New York City'
 ----
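
For reviewers: every hunk above applies the same conversion, so the before/after shape can be summarized in one AsciiDoc fragment. This sketch is illustrative, assembled from the changes in this patch rather than copied verbatim from any single page:

```asciidoc
// Before: bash-flavored listing, in some pages duplicated
// across Linux/Mac and Windows tabs
[source,bash]
----
bin/solr post -c films example/films/films.json
----

// After: one console-style listing with a "$ " prompt prefix
[,console]
----
$ bin/solr post -c films example/films/films.json
----
```

The `$ ` prefix marks each line as a command rather than output, which is why every converted command gains the prompt at the same time its block switches to the `[,console]` style, and why the per-OS Windows variants (backslash paths) could be dropped.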