From 131697fde7c1f5db9956c31c390241c3b6fc963c Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?C=C4=83t=C4=83lin=20Mari=C8=99?=
Date: Sat, 20 Sep 2014 17:26:29 +0300
Subject: [PATCH] Remove the `cross-domain` phrase from `README.md`
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The part about `Cross-domain Ajax and Flash` from the `README.md` file
isn't accurate, as by default:

* the `crossdomain.xml` file doesn't grant a web client — such as Adobe
  Flash Player, Adobe Reader, etc. — permission to handle data across
  multiple domains
* the Apache server configs do not allow cross-origin access to all
  resources, unless the user enables that behavior

---
 README.md        | 1 -
 dist/doc/misc.md | 2 +-
 src/doc/misc.md  | 2 +-
 3 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 59f9eb2c56..405058a6f2 100644
--- a/README.md
+++ b/README.md
@@ -45,7 +45,6 @@ Choose one of the following options:
 * An optimized Google Analytics snippet.
 * Apache server caching, compression, and other configuration defaults for
   Grade-A performance.
-* Cross-domain Ajax and Flash.
 * "Delete-key friendly." Easy to strip out parts you don't need.
 * Extensive inline and accompanying documentation.
 
diff --git a/dist/doc/misc.md b/dist/doc/misc.md
index d259bdd897..c23aa5bca6 100644
--- a/dist/doc/misc.md
+++ b/dist/doc/misc.md
@@ -166,7 +166,7 @@ If you want to disallow certain pages you will need to
 specify the path in a `Disallow` directive (e.g.: `Disallow: /path`) or, if
 you want to disallow crawling of all content, use `Disallow: /`.
-The '/robots.txt' file is not intended for access control, so don't try to
+The `/robots.txt` file is not intended for access control, so don't try to
 use it as such. Think of it as a "No Entry" sign, rather than a locked door.
 
 URLs disallowed by the `robots.txt` file might still be indexed without being
 crawled, and the content from within the `robots.txt` file can be viewed by
diff --git a/src/doc/misc.md b/src/doc/misc.md
index d259bdd897..c23aa5bca6 100644
--- a/src/doc/misc.md
+++ b/src/doc/misc.md
@@ -166,7 +166,7 @@ If you want to disallow certain pages you will need to
 specify the path in a `Disallow` directive (e.g.: `Disallow: /path`) or, if
 you want to disallow crawling of all content, use `Disallow: /`.
-The '/robots.txt' file is not intended for access control, so don't try to
+The `/robots.txt` file is not intended for access control, so don't try to
 use it as such. Think of it as a "No Entry" sign, rather than a locked door.
 
 URLs disallowed by the `robots.txt` file might still be indexed without being
 crawled, and the content from within the `robots.txt` file can be viewed by
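
The deny-by-default `crossdomain.xml` behavior that the commit message refers to can be expressed with a master policy file along these lines. This is a sketch of a restrictive policy, not necessarily the exact file the project ships; `permitted-cross-domain-policies="none"` tells Flash-based clients that no policy files on the domain may be honored:

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Deny by default: web clients such as Adobe Flash Player or
       Adobe Reader get no permission to load data cross-domain. -->
  <site-control permitted-cross-domain-policies="none"/>
</cross-domain-policy>
```

Granting cross-domain access (e.g. via `<allow-access-from domain="*"/>`) is an explicit opt-in the user would have to make, which is the point of the wording change above.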