From e127ec106fd43406dad29a03941dac4b16704c9a Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:25:37 +0530 Subject: [PATCH 01/45] Update README.md --- README.md | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index b40b68b..4f7a19d 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,9 @@ -# Amazon-Flipkart-Price-Comparison-Engine +

+ + Amazon-logo +

+ +

Amazon-Flipkart Price Comparison Engine

[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) [![CodeFactor](https://www.codefactor.io/repository/github/sushantpatrikar/amazon-flipkart-price-comparison-engine/badge)](https://www.codefactor.io/repository/github/sushantpatrikar/amazon-flipkart-price-comparison-engine) ![stars](https://img.shields.io/github/stars/sushantPatrikar/Amazon-Flipkart-Price-Comparison-Engine.svg) From 38a38efda6cff7b450eb1ca624b5cdf20f26340b Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:30:32 +0530 Subject: [PATCH 02/45] Update README.md --- README.md | 17 +++++++++++------ 1 file changed, 11 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 4f7a19d..446f442 100644 --- a/README.md +++ b/README.md @@ -4,12 +4,17 @@

Amazon-Flipkart Price Comparison Engine

-[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) -[![CodeFactor](https://www.codefactor.io/repository/github/sushantpatrikar/amazon-flipkart-price-comparison-engine/badge)](https://www.codefactor.io/repository/github/sushantpatrikar/amazon-flipkart-price-comparison-engine) -![stars](https://img.shields.io/github/stars/sushantPatrikar/Amazon-Flipkart-Price-Comparison-Engine.svg) -![forks](https://img.shields.io/github/forks/sushantPatrikar/Amazon-Flipkart-Price-Comparison-Engine.svg) - - +
Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart ## How to use From 3438bccb036d122e1255eaf9dc6fcd4f4cc493e1 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:31:24 +0530 Subject: [PATCH 03/45] Update README.md --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 446f442..28fea44 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ Amazon-logo

-

Amazon-Flipkart Price Comparison Engine

+
@@ -13,7 +13,7 @@
- +

Amazon-Flipkart Price Comparison Engine


Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart From 4b24d0e26e453ba3315a892420cf4539d54c74b4 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:33:46 +0530 Subject: [PATCH 04/45] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 28fea44..70a9931 100644 --- a/README.md +++ b/README.md @@ -15,7 +15,7 @@

Amazon-Flipkart Price Comparison Engine


-Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart +

Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart.

## How to use After running this program a window is popped up which asks user to enter the product From ff651f65720833778d6bf49e2ef50aa95e7f71e3 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:34:57 +0530 Subject: [PATCH 05/45] Update README.md --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 70a9931..802acba 100644 --- a/README.md +++ b/README.md @@ -14,9 +14,9 @@

Amazon-Flipkart Price Comparison Engine

-
-

Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart.

+

Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart.

+
## How to use After running this program a window is popped up which asks user to enter the product From 3b16ce794283c7eacb3f570ea74b50ea03f2a842 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:36:19 +0530 Subject: [PATCH 06/45] Update README.md --- README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/README.md b/README.md index 802acba..25cfc58 100644 --- a/README.md +++ b/README.md @@ -3,7 +3,7 @@ Amazon-logo

- +

Amazon-Flipkart Price Comparison Engine

@@ -13,7 +13,7 @@
-

Amazon-Flipkart Price Comparison Engine

+

Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart.


From bad52c60dd2104ed6309cc183ed0c4242a75e08e Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:36:53 +0530 Subject: [PATCH 07/45] Update README.md --- README.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/README.md b/README.md index 25cfc58..f428179 100644 --- a/README.md +++ b/README.md @@ -17,6 +17,9 @@

Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart.


+ + + ## How to use After running this program a window is popped up which asks user to enter the product From a1a98760b5fab2666726cd94b99947b1941c1ccd Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:38:40 +0530 Subject: [PATCH 08/45] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index f428179..8ba340f 100644 --- a/README.md +++ b/README.md @@ -23,7 +23,7 @@ ## How to use After running this program a window is popped up which asks user to enter the product -![screenshot 5](https://user-images.githubusercontent.com/40419750/42380586-114b5d8e-814c-11e8-9147-e24ad9a309a6.png) + After entering the product and clicking on the 'Find' button it will take us to another window which will show the title of the product on both sites and there corresponding prices From d4fead3c0ee3519bbc925028ffa8397cc9deedd8 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:40:33 +0530 Subject: [PATCH 09/45] Update README.md --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index 8ba340f..d8aab15 100644 --- a/README.md +++ b/README.md @@ -22,9 +22,9 @@ ## How to use After running this program a window is popped up which asks user to enter the product - - - +

+ +

After entering the product and clicking on the 'Find' button it will take us to another window which will show the title of the product on both sites and there corresponding prices ![screenshot 6](https://user-images.githubusercontent.com/40419750/42381017-687b5cfc-814d-11e8-9312-8a46054e5286.png) From 88fc24c77b3d394637578f7791dd380fbc4c2522 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:44:18 +0530 Subject: [PATCH 10/45] Update README.md --- README.md | 14 +++++++++----- 1 file changed, 9 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index d8aab15..03289af 100644 --- a/README.md +++ b/README.md @@ -21,17 +21,21 @@ ## How to use -After running this program a window is popped up which asks user to enter the product +

After running this program, a window pops up asking the user to enter the product.
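For readers skimming the series: that prompt is a plain Tkinter form (see the price_comparison_engine.py diffs further down). A minimal stand-alone sketch of it, with a print() placeholder instead of the real find() callback:

```python
# Minimal sketch of the prompt window; widget layout mirrors the Entry/Button grid
# that price_comparison_engine.py sets up, but the callback is only a placeholder.
from tkinter import Tk, Label, Entry, Button, StringVar

root = Tk()
root.title('Price Comparison Engine')
query = StringVar()

Label(root, text='Enter the product').grid(row=0, column=0)
Entry(root, textvariable=query).grid(row=0, column=1)
Button(root, text='Find', bd=4, command=lambda: print(query.get())).grid(row=1, column=1, pady=8)

root.mainloop()
```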

-After entering the product and clicking on the 'Find' button it will take us to another window which will show the title of the product on both sites and there corresponding prices -![screenshot 6](https://user-images.githubusercontent.com/40419750/42381017-687b5cfc-814d-11e8-9312-8a46054e5286.png) +

After entering the product and clicking on the 'Find' button, another window opens showing the title of the product on both sites and their corresponding prices.

-If you didn't get the desired product then click on the title to get suggestions related to your search +

+ +

-![screenshot 7](https://user-images.githubusercontent.com/40419750/42381407-90155cd0-814e-11e8-931a-7cef280047cc.png) +

If you don't get the desired product, click on the title to get suggestions related to your search.
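The suggestions come from fuzzy matching: price_comparison_engine.py runs difflib.get_close_matches over the titles scraped from each site, with a very low cutoff so nearly every result on the page qualifies, ranked best-first. A minimal sketch with made-up titles:

```python
# Stand-alone illustration of the fuzzy matching behind the suggestions; the titles
# are invented, but the call mirrors get_close_matches(user_input, titles, 20, 0.01).
from difflib import get_close_matches

titles = [
    'Apple iPhone X (Space Grey, 64 GB)',
    'Apple iPhone X (Silver, 256 GB)',
    'Apple iPhone XR (Black, 128 GB)',
    'Samsung Galaxy S9 (Midnight Black, 64 GB)',
]

# n=20 keeps up to 20 candidates; cutoff=0.01 accepts even weak matches.
suggestions = get_close_matches('Apple Iphone X', titles, n=20, cutoff=0.01)
print(suggestions)
```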

+

+ +

Select the desired product from the suggestions for both sites and then click on the 'Search' button to get their corresponding prices From 8e7077741785c62d5644dbd10f14d3b179c9be79 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:46:34 +0530 Subject: [PATCH 11/45] Update README.md --- README.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index 03289af..d2b4851 100644 --- a/README.md +++ b/README.md @@ -21,25 +21,26 @@ ## How to use -

After running this program, a window pops up asking the user to enter the product.

+

After running this program, a window pops up asking the user to enter the product.

-

After entering the product and clicking on the 'Find' button, another window opens showing the title of the product on both sites and their corresponding prices.

+

After entering the product and clicking on the 'Find' button, another window opens showing the title of the product on both sites and their corresponding prices.

-

If you don't get the desired product, click on the title to get suggestions related to your search.

+

If you don't get the desired product, click on the title to get suggestions related to your search.

-Select the desired product from the suggestions for both sites and then click on the 'Search' button to get their corresponding prices - -![screenshot 9](https://user-images.githubusercontent.com/40419750/42381782-cbcb1bd8-814f-11e8-92c2-245ed3f2dc5d.png) +

Select the desired product from the suggestions for both sites and then click on the 'Search' button to get their corresponding prices.
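In the later patches ('fixed price_amzn function', 'fixed function search', 'fixed some bugs'), this second lookup is served from a dictionary built during the first scrape, mapping each suggested title to its [price, link] pair, so pressing 'Search' on a suggestion does not fire another HTTP request. A sketch of that pattern with invented data:

```python
from collections import defaultdict

# Hypothetical scrape results: one entry per product card on the results page.
results = defaultdict(list)
results['Apple iPhone X (Silver, 64 GB)'] = ['52,999', 'https://www.example.com/iphone-x-64']
results['Apple iPhone X (Silver, 256 GB)'] = ['66,999', 'https://www.example.com/iphone-x-256']

def search(selected_title):
    # The 'Search' button only does a dictionary lookup on the cached results.
    price, link = results[selected_title]
    return price + '.00', link

print(search('Apple iPhone X (Silver, 256 GB)'))
```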

+

+ +

## Highlights: 1. You can try out suggestions to find other products related to your search just by clicking on the title of the product. From 4eedb1a21d0d1a5fa2db5a96784bcfd503997969 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 18:48:23 +0530 Subject: [PATCH 12/45] Update README.md --- README.md | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/README.md b/README.md index d2b4851..3114ea7 100644 --- a/README.md +++ b/README.md @@ -21,23 +21,23 @@ ## How to use -

After running this program, a window pops up asking the user to enter the product.

+

After running this program, a window pops up asking the user to enter the product.

-

After entering the product and clicking on the 'Find' button, another window opens showing the title of the product on both sites and their corresponding prices.

+

After entering the product and clicking on the 'Find' button, another window opens showing the title of the product on both sites and their corresponding prices.

-

If you don't get the desired product, click on the title to get suggestions related to your search.

+

If you don't get the desired product, click on the title to get suggestions related to your search.
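On the UI side, the clickable title is a Tkinter OptionMenu fed with that fuzzy-match list (option_amzn and option_flip in the 'reformatted code' patch further down). A minimal dropdown sketch with invented suggestions:

```python
# Mirrors OptionMenu(window, variable, *matches) from price_comparison_engine.py.
from tkinter import Tk, StringVar, OptionMenu

root = Tk()
matches = ['Apple iPhone X (64 GB)', 'Apple iPhone X (256 GB)', 'Apple iPhone XR']
choice = StringVar(value=matches[0])
OptionMenu(root, choice, *matches).pack(padx=20, pady=20)
root.mainloop()
```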

-

+

-

Select the desired product from the suggestions for both sites and then click on the 'Search' button to get their corresponding prices.

+

Select the desired product from the suggestions for both sites and then click on the 'Search' button to get their corresponding prices.
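The prices shown in these windows are scraped straight from the two search-result pages; the later patches in this series ('fixed tags for amazon', 'fixed some bugs', 'fixed amazon issue') mostly chase the sites' changing CSS class names. The underlying pattern is sketched below; the URL and class names are placeholders, since the real Amazon/Flipkart markup changes often, which is exactly why those patches exist:

```python
import requests
from bs4 import BeautifulSoup

# Browser-like User-Agent, as used throughout price_comparison_engine.py.
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
url = 'https://www.example.com/search?q=apple+iphone+x'  # placeholder search URL

response = requests.get(url, headers=headers, timeout=10)
soup = BeautifulSoup(response.text, 'html.parser')

# Placeholder class names; the real selectors are site-specific and change frequently.
for card in soup.find_all('div', {'class': 'product-card'}):
    title = card.find('span', {'class': 'title'})
    price = card.find('span', {'class': 'price'})
    link = card.find('a')
    if title and link:
        print(title.text, price.text if price else 'Currently Unavailable', link.get('href'))
```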

From 3d64ba98f133d0058a386eab2719a7cfbae5b221 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 19:22:09 +0530 Subject: [PATCH 13/45] Update README.md --- README.md | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/README.md b/README.md index 3114ea7..e3b8c6a 100644 --- a/README.md +++ b/README.md @@ -51,6 +51,11 @@ To avoid 'Product not found', try searching the basic model and then select the For Example: Instead of searching 'Apple iPhone X (Space Grey, 256GB)', search 'Apple iPhone X' and then select the desired specifications from suggestions. + + +## Future Scope: +Currently this program works only on two e-commerce sites. More websites can be added to it. If you have more ideas, I'm excited to view Pull Requests from your side! + For more information, visit my website From eeca6d6b9f37d1f5d1d32adc07f81a09b1518a8d Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 19:22:51 +0530 Subject: [PATCH 14/45] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index e3b8c6a..b9aa530 100644 --- a/README.md +++ b/README.md @@ -56,7 +56,7 @@ For Example: Instead of searching 'Apple iPhone X (Space Grey, 256GB)', search ' ## Future Scope: Currently this program works only on two e-commerce sites. More websites can be added to it. If you have more ideas, I'm excited to view Pull Requests from your side! -For more information, visit my website +For more information, visit my website. From 1797818750ecb0ce770c18482dcf6c52d655b251 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 19:24:39 +0530 Subject: [PATCH 15/45] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index b9aa530..9bd1e36 100644 --- a/README.md +++ b/README.md @@ -56,7 +56,7 @@ For Example: Instead of searching 'Apple iPhone X (Space Grey, 256GB)', search ' ## Future Scope: Currently this program works only on two e-commerce sites. More websites can be added to it. If you have more ideas, I'm excited to view Pull Requests from your side! -For more information, visit my website. +For more information, you can visit my website. 
From 95c486e62407eccc00d676faf2099bd0904389bd Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 20:02:14 +0530 Subject: [PATCH 16/45] Create FUNDING.yml --- .github/FUNDING.yml | 12 ++++++++++++ 1 file changed, 12 insertions(+) create mode 100644 .github/FUNDING.yml diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml new file mode 100644 index 0000000..4b16f59 --- /dev/null +++ b/.github/FUNDING.yml @@ -0,0 +1,12 @@ +# These are supported funding model platforms + +github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2] +patreon: # Replace with a single Patreon username +open_collective: # Replace with a single Open Collective username +ko_fi: # Replace with a single Ko-fi username +tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel +community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry +liberapay: # Replace with a single Liberapay username +issuehunt: # Replace with a single IssueHunt username +otechie: # Replace with a single Otechie username +custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2'] From 5d8257c88358a0f244a0a8bdcd2af795f10ee622 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 4 Aug 2019 20:03:13 +0530 Subject: [PATCH 17/45] Delete FUNDING.yml --- .github/FUNDING.yml | 12 ------------ 1 file changed, 12 deletions(-) delete mode 100644 .github/FUNDING.yml diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml deleted file mode 100644 index 4b16f59..0000000 --- a/.github/FUNDING.yml +++ /dev/null @@ -1,12 +0,0 @@ -# These are supported funding model platforms - -github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2] -patreon: # Replace with a single Patreon username -open_collective: # Replace with a single Open Collective username -ko_fi: # Replace with a single Ko-fi username -tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel -community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry -liberapay: # Replace with a single Liberapay username -issuehunt: # Replace with a single IssueHunt username -otechie: # Replace with a single Otechie username -custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2'] From 653dce75fedfa8910f1bfa0410d4965ce340a4ac Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Tue, 6 Aug 2019 19:12:18 +0530 Subject: [PATCH 18/45] Rename Price comparison engine.py to Price_comparison_engine.py --- Price comparison engine.py => Price_comparison_engine.py | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename Price comparison engine.py => Price_comparison_engine.py (100%) diff --git a/Price comparison engine.py b/Price_comparison_engine.py similarity index 100% rename from Price comparison engine.py rename to Price_comparison_engine.py From 497054341336489d3eeb60bd64d7a72440e2d744 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Tue, 6 Aug 2019 19:16:36 +0530 Subject: [PATCH 19/45] Rename Price_comparison_engine.py to price_comparison_engine.py --- Price_comparison_engine.py => price_comparison_engine.py | 0 1 file changed, 0 insertions(+), 0 deletions(-) rename Price_comparison_engine.py => price_comparison_engine.py (100%) diff --git a/Price_comparison_engine.py b/price_comparison_engine.py similarity index 100% rename from Price_comparison_engine.py rename to price_comparison_engine.py From 7d3c01d1e12d9cfd922332ee26f8c35198faf806 Mon Sep 17 00:00:00 2001 From: 
Sushant Patrikar Date: Sun, 27 Oct 2019 00:39:17 +0530 Subject: [PATCH 20/45] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 9bd1e36..ff8bb33 100644 --- a/README.md +++ b/README.md @@ -15,7 +15,7 @@ -

Compares price of the product entered by an user from e-commerce sites Amazon and Flipkart.

+

Compares price of the product entered by the user from e-commerce sites Amazon and Flipkart.
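A later patch in this series ('added headers for amazon') swaps the single User-Agent string for a fuller set of browser-like headers and also builds a proxies dict from a hard-coded list, although that dict is not passed to requests.get in the diff as shown. If rotating proxies were intended, the call would need the proxies= argument; a hedged sketch with a placeholder proxy address:

```python
import random
import requests

# Browser-like headers, in the spirit of the 'added headers for amazon' patch.
headers = {
    'user-agent': 'Mozilla/5.0 (X11; CrOS x86_64 8172.45.0) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/51.0.2704.64 Safari/537.36',
    'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
}

# Placeholder proxy; requests only routes through it if proxies= is actually passed.
proxies_list = ['203.0.113.10:8080']
proxies = {'https': 'http://' + random.choice(proxies_list)}

try:
    response = requests.get('https://www.example.com/', headers=headers,
                            proxies=proxies, timeout=10)
    print(response.status_code)
except requests.RequestException as exc:
    print('request failed:', exc)
```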


From f9f4a9a2ec64998b28eb36dd97ac3cb13042d3f8 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 5 Apr 2020 12:28:45 +0530 Subject: [PATCH 21/45] Update README.md --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index ff8bb33..7ada9d8 100644 --- a/README.md +++ b/README.md @@ -11,6 +11,7 @@ + From f5a65264994494d922533b0f8e5534e0c6d0ffa7 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Sun, 5 Apr 2020 12:30:00 +0530 Subject: [PATCH 22/45] Update README.md --- README.md | 1 - 1 file changed, 1 deletion(-) diff --git a/README.md b/README.md index 7ada9d8..ff8bb33 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,6 @@ - From 00711e2044db1c74898cb8751b34cf0ed6e27593 Mon Sep 17 00:00:00 2001 From: Sushant Date: Mon, 24 Aug 2020 10:23:53 +0530 Subject: [PATCH 23/45] fixed tags for amazon --- price_comparison_engine.py | 6 ++++-- test.py | 0 2 files changed, 4 insertions(+), 2 deletions(-) create mode 100644 test.py diff --git a/price_comparison_engine.py b/price_comparison_engine.py index 8f740d8..da11d90 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -144,13 +144,15 @@ def price_amzn(self,key): source_code = requests.get(url_amzn, headers=self.headers) plain_text = source_code.text self.soup = BeautifulSoup(plain_text, "html.parser") - for titles in self.soup.find_all('a', {'class': 'a-link-normal a-text-normal'}): + for titles in self.soup.find_all('span',{'class':'a-size-medium a-color-base a-text-normal'}): try: - self.title_arr.append(titles.img.get('alt')) + self.title_arr.append(titles.text) except AttributeError: + print('hi') continue # Getting closest match of the input from user in titles + print('title',self.title_arr) user_input = self.var.get().title() self.matches_amzn = get_close_matches(user_input, self.title_arr, 20, 0.01) self.opt_title.set(self.matches_amzn[0]) diff --git a/test.py b/test.py new file mode 100644 index 0000000..e69de29 From e6e21ad3e4a8ed824e3ab710191e64ea8af65805 Mon Sep 17 00:00:00 2001 From: Sushant Date: Mon, 24 Aug 2020 12:49:55 +0530 Subject: [PATCH 24/45] fixed price_amzn function --- price_comparison_engine.py | 44 +++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 24 deletions(-) diff --git a/price_comparison_engine.py b/price_comparison_engine.py index da11d90..3ffdba8 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -3,6 +3,7 @@ import requests from difflib import get_close_matches import webbrowser +from collections import defaultdict root = Tk() class Price_compare: @@ -139,36 +140,31 @@ def price_amzn(self,key): # Getting titles of all products on that page - self.title_arr = [] - self.opt_title = StringVar() + map = defaultdict(list) + home = 'https://www.amazon.in' source_code = requests.get(url_amzn, headers=self.headers) plain_text = source_code.text + self.opt_title = StringVar() self.soup = BeautifulSoup(plain_text, "html.parser") - for titles in self.soup.find_all('span',{'class':'a-size-medium a-color-base a-text-normal'}): - try: - self.title_arr.append(titles.text) - except AttributeError: - print('hi') - continue - - # Getting closest match of the input from user in titles - print('title',self.title_arr) + for html in self.soup.find_all('div', { + 'class': 'sg-col-4-of-12 sg-col-8-of-16 sg-col-16-of-24 sg-col-12-of-20 sg-col-24-of-32 sg-col sg-col-28-of-36 sg-col-20-of-28'}): + title, price, link = None, 'Currently Unavailable', None + for heading in html.find_all('span', {'class': 'a-size-medium 
a-color-base a-text-normal'}): + title = heading.text + for p in html.find_all('span', {'class': 'a-price-whole'}): + price = p.text + for l in html.find_all('a', {'class': 'a-link-normal a-text-normal'}): + link = home + l.get('href') + map[title] = [price, link] user_input = self.var.get().title() - self.matches_amzn = get_close_matches(user_input, self.title_arr, 20, 0.01) + self.matches_amzn = get_close_matches(user_input, list(map.keys()), 20, 0.01) + self.looktable = {} + for title in self.matches_amzn: + self.looktable[title] = map[title] self.opt_title.set(self.matches_amzn[0]) - product_block = self.soup.find(attrs= {'title': self.opt_title.get()}) - self.product_link = product_block.get('href') - product_source_code = requests.get(self.product_link, headers=headers) - product_plain_text = product_source_code.text - product_soup = BeautifulSoup(product_plain_text, "html.parser") - try: - for price in product_soup.find(attrs={'id': 'priceblock_ourprice'}): + self.var_amzn.set(self.looktable[self.matches_amzn[0]][0]) + self.product_link = self.looktable[self.matches_amzn[0]][1] - self.var_amzn.set(price) - self.title_amzn_var.set(self.matches_amzn[0]) - except TypeError: - self.var_amzn.set('None') - self.title_amzn_var.set('product not available') def search(self): amzn_get = self.variable_amzn.get() From 19c1a48b0bbe9f47cd03d49471bd196d8e9dd1aa Mon Sep 17 00:00:00 2001 From: Sushant Date: Mon, 24 Aug 2020 12:57:02 +0530 Subject: [PATCH 25/45] fixed function search --- price_comparison_engine.py | 17 ++++------------- 1 file changed, 4 insertions(+), 13 deletions(-) diff --git a/price_comparison_engine.py b/price_comparison_engine.py index 3ffdba8..bd217d6 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -162,25 +162,16 @@ def price_amzn(self,key): for title in self.matches_amzn: self.looktable[title] = map[title] self.opt_title.set(self.matches_amzn[0]) - self.var_amzn.set(self.looktable[self.matches_amzn[0]][0]) + self.var_amzn.set(self.looktable[self.matches_amzn[0]][0]+'.00') self.product_link = self.looktable[self.matches_amzn[0]][1] def search(self): amzn_get = self.variable_amzn.get() self.opt_title.set(amzn_get) - product_block = self.soup.find(attrs={'title': self.opt_title.get()}) - self.product_link = product_block.get('href') - product_source_code = requests.get(self.product_link, headers=self.headers) - product_plain_text = product_source_code.text - product_soup = BeautifulSoup(product_plain_text, "html.parser") - try: - for price in product_soup.find(attrs={'id': 'priceblock_ourprice'}): - self.var_amzn.set(price) - except TypeError: - self.var_amzn.set('None') - self.title_amzn_var.set('product not available') - + product = self.opt_title.get() + price,self.product_link = self.looktable[product][0],self.looktable[product][1] + self.var_amzn.set(price+'.00') flip_get = self.variable_flip.get() self.opt_title_flip.set(flip_get) From 9be83b7a267c0c68003346657d9780f33fee2704 Mon Sep 17 00:00:00 2001 From: Sushant Date: Mon, 24 Aug 2020 20:35:48 +0530 Subject: [PATCH 26/45] reformatted code --- price_comparison_engine.py | 64 +++++++++++++++++++------------------- test.py | 0 2 files changed, 32 insertions(+), 32 deletions(-) delete mode 100644 test.py diff --git a/price_comparison_engine.py b/price_comparison_engine.py index bd217d6..a5d2ddc 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -6,9 +6,11 @@ from collections import defaultdict root = Tk() + + class Price_compare: - def __init__(self,master): + def 
__init__(self, master): self.var = StringVar() self.var_ebay = StringVar() self.var_flipkart = StringVar() @@ -20,7 +22,7 @@ def __init__(self,master): entry = Entry(master, textvariable=self.var) entry.grid(row=0, column=1) - button_find = Button(master, text='Find', bd=4,command=self.find) + button_find = Button(master, text='Find', bd=4, command=self.find) button_find.grid(row=1, column=1, sticky=W, pady=8) def find(self): @@ -33,37 +35,33 @@ def find(self): self.variable_amzn = StringVar() self.variable_flip = StringVar() - for word in self.product_arr: - if self.n == 1 : + if self.n == 1: self.key = self.key + str(word) self.n += 1 - + else: self.key = self.key + '+' + str(word) - - self.window =Toplevel(root) + self.window = Toplevel(root) self.window.title('Price Comparison Engine') - label_title_flip = Label(self.window, text= 'Flipkart Title:') - label_title_flip.grid(row=0,column=0,sticky=W) - + label_title_flip = Label(self.window, text='Flipkart Title:') + label_title_flip.grid(row=0, column=0, sticky=W) label_flipkart = Label(self.window, text='Flipkart price (Rs):') label_flipkart.grid(row=1, column=0, sticky=W) entry_flipkart = Entry(self.window, textvariable=self.var_flipkart) - entry_flipkart.grid(row=1, column=1,sticky=W) + entry_flipkart.grid(row=1, column=1, sticky=W) label_title_amzn = Label(self.window, text='Amazon Title:') label_title_amzn.grid(row=3, column=0, sticky=W) - label_amzn = Label(self.window, text='Amazon price (Rs):') label_amzn.grid(row=4, column=0, sticky=W) entry_amzn = Entry(self.window, textvariable=self.var_amzn) - entry_amzn.grid(row=4, column=1,sticky=W) + entry_amzn.grid(row=4, column=1, sticky=W) self.price_flipkart(self.key) self.price_amzn(self.key) @@ -78,31 +76,32 @@ def find(self): self.variable_flip.set('Product not available') option_amzn = OptionMenu(self.window, self.variable_amzn, *self.matches_amzn) - option_amzn.grid(row=3,column=1,sticky=W) + option_amzn.grid(row=3, column=1, sticky=W) lab_amz = Label(self.window, text='Not this? Try out suggestions by clicking on the title') - lab_amz.grid(row=3,column=2,padx=4) + lab_amz.grid(row=3, column=2, padx=4) option_flip = OptionMenu(self.window, self.variable_flip, *self.matches_flip) option_flip.grid(row=0, column=1, sticky=W) lab_flip = Label(self.window, text='Not this? 
Try out suggestions by clicking on the title') - lab_flip.grid(row=0,column=2,padx=4) + lab_flip.grid(row=0, column=2, padx=4) - button_search = Button(self.window, text='Search',command=self.search,bd=4) - button_search.grid(row=2, column=2, sticky=E,padx=10, pady=4) + button_search = Button(self.window, text='Search', command=self.search, bd=4) + button_search.grid(row=2, column=2, sticky=E, padx=10, pady=4) - button_amzn_visit = Button(self.window, text='Visit Site', command=self.visit_amzn,bd=4) - button_amzn_visit.grid(row=4,column=2,sticky=W) + button_amzn_visit = Button(self.window, text='Visit Site', command=self.visit_amzn, bd=4) + button_amzn_visit.grid(row=4, column=2, sticky=W) - button_flip_visit = Button(self.window, text='Visit Site', command= self.visit_flip,bd=4) + button_flip_visit = Button(self.window, text='Visit Site', command=self.visit_flip, bd=4) button_flip_visit.grid(row=1, column=2, sticky=W) + def price_flipkart(self, key): + url_flip = 'https://www.flipkart.com/search?q=' + str( + key) + '&marketplace=FLIPKART&otracker=start&as-show=on&as=off' - def price_flipkart(self,key): - url_flip = 'https://www.flipkart.com/search?q=' + str(key) + '&marketplace=FLIPKART&otracker=start&as-show=on&as=off' - - self.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} + self.headers = { + 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} title_arr = [] self.opt_title_flip = StringVar() source_code = requests.get(url_flip, headers=self.headers) @@ -122,7 +121,7 @@ def price_flipkart(self,key): for div in self.soup_flip.find_all('a', {'class': '_31qSD5'}): for each in div.find_all('div', {'class': '_3wU53n'}): if each.text == self.opt_title_flip.get(): - self.link_flip ='https://www.flipkart.com' + div.get('href') + self.link_flip = 'https://www.flipkart.com' + div.get('href') product_source_code = requests.get(self.link_flip, headers=self.headers) product_plain_text = product_source_code.text @@ -132,11 +131,12 @@ def price_flipkart(self,key): except UnboundLocalError: pass - def price_amzn(self,key): + def price_amzn(self, key): url_amzn = 'https://www.amazon.in/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=' + str(key) # Faking the visit from a browser - headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} + headers = { + 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} # Getting titles of all products on that page @@ -162,16 +162,15 @@ def price_amzn(self,key): for title in self.matches_amzn: self.looktable[title] = map[title] self.opt_title.set(self.matches_amzn[0]) - self.var_amzn.set(self.looktable[self.matches_amzn[0]][0]+'.00') + self.var_amzn.set(self.looktable[self.matches_amzn[0]][0] + '.00') self.product_link = self.looktable[self.matches_amzn[0]][1] - def search(self): amzn_get = self.variable_amzn.get() self.opt_title.set(amzn_get) product = self.opt_title.get() - price,self.product_link = self.looktable[product][0],self.looktable[product][1] - self.var_amzn.set(price+'.00') + price, self.product_link = self.looktable[product][0], self.looktable[product][1] + self.var_amzn.set(price + '.00') flip_get = self.variable_flip.get() self.opt_title_flip.set(flip_get) @@ -195,6 +194,7 @@ def 
visit_amzn(self): def visit_flip(self): webbrowser.open(self.link_flip) + c = Price_compare(root) root.title('Price Comparison Engine') root.mainloop() diff --git a/test.py b/test.py deleted file mode 100644 index e69de29..0000000 From 74d9f3dff431faf73d06cbd6cf3bccecf596f6aa Mon Sep 17 00:00:00 2001 From: Sushant Date: Mon, 24 Aug 2020 20:36:30 +0530 Subject: [PATCH 27/45] added requirements.txt --- requirements.txt | 43 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 43 insertions(+) create mode 100644 requirements.txt diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..8dddd49 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,43 @@ +altgraph==0.16.1 +astroid==2.3.3 +auto-py-to-exe==2.6.6 +bottle==0.12.13 +bottle-websocket==0.2.9 +certifi==2018.4.16 +cffi==1.11.5 +chardet==3.0.4 +colorama==0.4.3 +cx-Freeze==5.1.1 +cycler==0.10.0 +Eel==0.9.10 +future==0.16.0 +gevent==1.3.5 +gevent-websocket==0.10.1 +greenlet==0.4.14 +idna==2.6 +isort==4.3.21 +kiwisolver==1.0.1 +lazy-object-proxy==1.4.3 +macholib==1.10 +matplotlib==2.2.2 +mccabe==0.6.1 +numpy==1.15.4 +pefile==2017.11.5 +protobuf==3.6.1 +pycparser==2.18 +pyinstaller==3.6 +pylint==2.4.4 +pyparsing==2.2.0 +pypiwin32==223 +pyqtgraph==0.10.0 +python-dateutil==2.7.3 +pytz==2018.4 +pywin32==223 +pywin32-ctypes==0.2.0 +requests==2.18.4 +six==1.14.0 +tensorflow==1.0.0 +typed-ast==1.4.1 +urllib3==1.22 +whichcraft==0.4.1 +wrapt==1.11.2 From b0d7ca53709195580e4772210608010ca6be7b86 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 24 Aug 2020 15:07:40 +0000 Subject: [PATCH 28/45] Bump requests from 2.18.4 to 2.20.0 Bumps [requests](https://github.com/psf/requests) from 2.18.4 to 2.20.0. - [Release notes](https://github.com/psf/requests/releases) - [Changelog](https://github.com/psf/requests/blob/master/HISTORY.md) - [Commits](https://github.com/psf/requests/compare/v2.18.4...v2.20.0) Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 8dddd49..439079e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -34,7 +34,7 @@ python-dateutil==2.7.3 pytz==2018.4 pywin32==223 pywin32-ctypes==0.2.0 -requests==2.18.4 +requests==2.20.0 six==1.14.0 tensorflow==1.0.0 typed-ast==1.4.1 From b006bf99b7bacfcbab2c7d6bff54fdf20d52c6de Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 24 Aug 2020 15:07:41 +0000 Subject: [PATCH 29/45] Bump urllib3 from 1.22 to 1.24.2 Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.22 to 1.24.2. 
- [Release notes](https://github.com/urllib3/urllib3/releases) - [Changelog](https://github.com/urllib3/urllib3/blob/master/CHANGES.rst) - [Commits](https://github.com/urllib3/urllib3/compare/1.22...1.24.2) Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 8dddd49..a9d7c13 100644 --- a/requirements.txt +++ b/requirements.txt @@ -38,6 +38,6 @@ requests==2.18.4 six==1.14.0 tensorflow==1.0.0 typed-ast==1.4.1 -urllib3==1.22 +urllib3==1.24.2 whichcraft==0.4.1 wrapt==1.11.2 From fb5e635c44fdaf548b1570e904b989892cd0a061 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Mon, 24 Aug 2020 20:39:11 +0530 Subject: [PATCH 30/45] Create SECURITY.md --- SECURITY.md | 21 +++++++++++++++++++++ 1 file changed, 21 insertions(+) create mode 100644 SECURITY.md diff --git a/SECURITY.md b/SECURITY.md new file mode 100644 index 0000000..034e848 --- /dev/null +++ b/SECURITY.md @@ -0,0 +1,21 @@ +# Security Policy + +## Supported Versions + +Use this section to tell people about which versions of your project are +currently being supported with security updates. + +| Version | Supported | +| ------- | ------------------ | +| 5.1.x | :white_check_mark: | +| 5.0.x | :x: | +| 4.0.x | :white_check_mark: | +| < 4.0 | :x: | + +## Reporting a Vulnerability + +Use this section to tell people how to report a vulnerability. + +Tell them where to go, how often they can expect to get an update on a +reported vulnerability, what to expect if the vulnerability is accepted or +declined, etc. From 445f21f854db488f2a6b8a4c09ae0fc1dc74afd3 Mon Sep 17 00:00:00 2001 From: Sushant Patrikar Date: Mon, 24 Aug 2020 20:39:40 +0530 Subject: [PATCH 31/45] Delete SECURITY.md --- SECURITY.md | 21 --------------------- 1 file changed, 21 deletions(-) delete mode 100644 SECURITY.md diff --git a/SECURITY.md b/SECURITY.md deleted file mode 100644 index 034e848..0000000 --- a/SECURITY.md +++ /dev/null @@ -1,21 +0,0 @@ -# Security Policy - -## Supported Versions - -Use this section to tell people about which versions of your project are -currently being supported with security updates. - -| Version | Supported | -| ------- | ------------------ | -| 5.1.x | :white_check_mark: | -| 5.0.x | :x: | -| 4.0.x | :white_check_mark: | -| < 4.0 | :x: | - -## Reporting a Vulnerability - -Use this section to tell people how to report a vulnerability. - -Tell them where to go, how often they can expect to get an update on a -reported vulnerability, what to expect if the vulnerability is accepted or -declined, etc. From b67660485ece90991b1fcf18bcb50f07bfc434b2 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 24 Aug 2020 15:10:54 +0000 Subject: [PATCH 32/45] Bump tensorflow from 1.0.0 to 1.15.2 Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 1.0.0 to 1.15.2. 
- [Release notes](https://github.com/tensorflow/tensorflow/releases) - [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md) - [Commits](https://github.com/tensorflow/tensorflow/compare/v1.0.0...v1.15.2) Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index db36080..9c8e5fe 100644 --- a/requirements.txt +++ b/requirements.txt @@ -36,7 +36,7 @@ pywin32==223 pywin32-ctypes==0.2.0 requests==2.20.0 six==1.14.0 -tensorflow==1.0.0 +tensorflow==1.15.2 typed-ast==1.4.1 urllib3==1.24.2 whichcraft==0.4.1 From 9a96843665c6f10e6c6e98bb0659fd7eadac42f0 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 13 Nov 2020 19:04:33 +0000 Subject: [PATCH 33/45] Bump tensorflow from 1.15.2 to 2.3.1 Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 1.15.2 to 2.3.1. - [Release notes](https://github.com/tensorflow/tensorflow/releases) - [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md) - [Commits](https://github.com/tensorflow/tensorflow/compare/v1.15.2...v2.3.1) Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 9c8e5fe..37e5a09 100644 --- a/requirements.txt +++ b/requirements.txt @@ -36,7 +36,7 @@ pywin32==223 pywin32-ctypes==0.2.0 requests==2.20.0 six==1.14.0 -tensorflow==1.15.2 +tensorflow==2.3.1 typed-ast==1.4.1 urllib3==1.24.2 whichcraft==0.4.1 From 752c49dba2f9f40d8dafbaa64d39bd6e7ab7e97d Mon Sep 17 00:00:00 2001 From: Sushant Date: Thu, 26 Nov 2020 14:50:19 +0530 Subject: [PATCH 34/45] fixed some bugs --- price_comparison_engine.py | 58 +++++++++++++++----------------------- 1 file changed, 23 insertions(+), 35 deletions(-) diff --git a/price_comparison_engine.py b/price_comparison_engine.py index a5d2ddc..24203e3 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -99,37 +99,38 @@ def find(self): def price_flipkart(self, key): url_flip = 'https://www.flipkart.com/search?q=' + str( key) + '&marketplace=FLIPKART&otracker=start&as-show=on&as=off' + map = defaultdict(list) self.headers = { 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} - title_arr = [] - self.opt_title_flip = StringVar() source_code = requests.get(url_flip, headers=self.headers) - plain_text = source_code.text - self.soup_flip = BeautifulSoup(plain_text, "html.parser") - for title in self.soup_flip.find_all('div', {'class': '_3wU53n'}): - title_arr.append(title.text) + soup = BeautifulSoup(source_code.text, "html.parser") + self.opt_title_flip = StringVar() + home = 'https://www.flipkart.com' + for block in soup.find_all('div', {'class': '_2kHMtA'}): + title, price, link = None, 'Currently Unavailable', None + for heading in block.find_all('div', {'class': '_4rR01T'}): + title = heading.text + for p in block.find_all('div', {'class': '_30jeq3 _1_WHN1'}): + price = p.text[1:] + for l in block.find_all('a', {'class': '_1fQZEK'}): + link = home + l.get('href') + map[title] = [price,link] user_input = self.var.get().title() + self.matches_flip = get_close_matches(user_input, map.keys(), 20, 0.1) + self.looktable_flip = {} + for title in self.matches_flip: + self.looktable_flip[title] = map[title] + + - self.matches_flip = get_close_matches(user_input, title_arr, 20, 0.1) try: 
self.opt_title_flip.set(self.matches_flip[0]) + self.var_flipkart.set(self.looktable_flip[self.matches_flip[0]][0] + '.00') + self.link_flip = self.looktable_flip[self.matches_flip[0]][1] except IndexError: self.opt_title_flip.set('Product not found') - try: - for div in self.soup_flip.find_all('a', {'class': '_31qSD5'}): - for each in div.find_all('div', {'class': '_3wU53n'}): - if each.text == self.opt_title_flip.get(): - self.link_flip = 'https://www.flipkart.com' + div.get('href') - - product_source_code = requests.get(self.link_flip, headers=self.headers) - product_plain_text = product_source_code.text - product_soup = BeautifulSoup(product_plain_text, "html.parser") - for price in product_soup.find_all('div', {'class': '_1vC4OE _3qQ9m1'}): - self.var_flipkart.set(price.text[1:] + '.00') - except UnboundLocalError: - pass def price_amzn(self, key): url_amzn = 'https://www.amazon.in/s/ref=nb_sb_noss_2?url=search-alias%3Daps&field-keywords=' + str(key) @@ -172,21 +173,8 @@ def search(self): price, self.product_link = self.looktable[product][0], self.looktable[product][1] self.var_amzn.set(price + '.00') flip_get = self.variable_flip.get() - self.opt_title_flip.set(flip_get) - - try: - for div in self.soup_flip.find_all('a', {'class': '_31qSD5'}): - for each in div.find_all('div', {'class': '_3wU53n'}): - if each.text == self.opt_title_flip.get(): - self.link_flip = 'https://www.flipkart.com' + div.get('href') - - product_source_code = requests.get(self.link_flip, headers=self.headers) - product_plain_text = product_source_code.text - product_soup = BeautifulSoup(product_plain_text, "html.parser") - for price in product_soup.find_all('div', {'class': '_1vC4OE _3qQ9m1'}): - self.var_flipkart.set(price.text[1:] + '.00') - except UnboundLocalError: - pass + flip_price, self.link_flip = self.looktable_flip[flip_get][0],self.looktable_flip[flip_get][1] + self.var_flipkart.set(flip_price + '.00') def visit_amzn(self): webbrowser.open(self.product_link) From 32bdae9e48075ce5d0b3fef7f873dde98110cb24 Mon Sep 17 00:00:00 2001 From: Sushant Date: Thu, 26 Nov 2020 14:57:41 +0530 Subject: [PATCH 35/45] fixed some bugs --- price_comparison_engine.py | 6 ++---- 1 file changed, 2 insertions(+), 4 deletions(-) diff --git a/price_comparison_engine.py b/price_comparison_engine.py index 24203e3..de377c5 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -115,7 +115,7 @@ def price_flipkart(self, key): price = p.text[1:] for l in block.find_all('a', {'class': '_1fQZEK'}): link = home + l.get('href') - map[title] = [price,link] + map[title] = [price, link] user_input = self.var.get().title() self.matches_flip = get_close_matches(user_input, map.keys(), 20, 0.1) @@ -123,8 +123,6 @@ def price_flipkart(self, key): for title in self.matches_flip: self.looktable_flip[title] = map[title] - - try: self.opt_title_flip.set(self.matches_flip[0]) self.var_flipkart.set(self.looktable_flip[self.matches_flip[0]][0] + '.00') @@ -173,7 +171,7 @@ def search(self): price, self.product_link = self.looktable[product][0], self.looktable[product][1] self.var_amzn.set(price + '.00') flip_get = self.variable_flip.get() - flip_price, self.link_flip = self.looktable_flip[flip_get][0],self.looktable_flip[flip_get][1] + flip_price, self.link_flip = self.looktable_flip[flip_get][0], self.looktable_flip[flip_get][1] self.var_flipkart.set(flip_price + '.00') def visit_amzn(self): From 61f90d956550c9bbef7b62e6a5df2518031eb48d Mon Sep 17 00:00:00 2001 From: Sushant Date: Fri, 8 Jan 2021 23:03:20 +0530 Subject: [PATCH 
36/45] fixed amazon issue --- price_comparison_engine.py | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-) diff --git a/price_comparison_engine.py b/price_comparison_engine.py index de377c5..6ed0649 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -137,24 +137,22 @@ def price_amzn(self, key): headers = { 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} - # Getting titles of all products on that page - map = defaultdict(list) home = 'https://www.amazon.in' source_code = requests.get(url_amzn, headers=self.headers) plain_text = source_code.text self.opt_title = StringVar() self.soup = BeautifulSoup(plain_text, "html.parser") - for html in self.soup.find_all('div', { - 'class': 'sg-col-4-of-12 sg-col-8-of-16 sg-col-16-of-24 sg-col-12-of-20 sg-col-24-of-32 sg-col sg-col-28-of-36 sg-col-20-of-28'}): - title, price, link = None, 'Currently Unavailable', None + for html in self.soup.find_all('div', {'class': 'sg-col-inner'}): + title, link = None, None for heading in html.find_all('span', {'class': 'a-size-medium a-color-base a-text-normal'}): title = heading.text for p in html.find_all('span', {'class': 'a-price-whole'}): price = p.text for l in html.find_all('a', {'class': 'a-link-normal a-text-normal'}): link = home + l.get('href') - map[title] = [price, link] + if title and link: + map[title] = [price, link] user_input = self.var.get().title() self.matches_amzn = get_close_matches(user_input, list(map.keys()), 20, 0.01) self.looktable = {} From 1becea81a4f082898cdf51bf735459f592476602 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Fri, 8 Jan 2021 17:35:59 +0000 Subject: [PATCH 37/45] Bump tensorflow from 2.3.1 to 2.4.0 Bumps [tensorflow](https://github.com/tensorflow/tensorflow) from 2.3.1 to 2.4.0. 
- [Release notes](https://github.com/tensorflow/tensorflow/releases) - [Changelog](https://github.com/tensorflow/tensorflow/blob/master/RELEASE.md) - [Commits](https://github.com/tensorflow/tensorflow/compare/v2.3.1...v2.4.0) Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 37e5a09..7acc791 100644 --- a/requirements.txt +++ b/requirements.txt @@ -36,7 +36,7 @@ pywin32==223 pywin32-ctypes==0.2.0 requests==2.20.0 six==1.14.0 -tensorflow==2.3.1 +tensorflow==2.4.0 typed-ast==1.4.1 urllib3==1.24.2 whichcraft==0.4.1 From 1e98f62b2c94a4b9ceb3d2de04bd24b42f02bf9d Mon Sep 17 00:00:00 2001 From: Pranav Suryawanshi <65610577+Pranav-Code-007@users.noreply.github.com> Date: Sat, 23 Jan 2021 21:19:44 +0530 Subject: [PATCH 38/45] Update price_comparison_engine.py --- price_comparison_engine.py | 15 +++++++++------ 1 file changed, 9 insertions(+), 6 deletions(-) diff --git a/price_comparison_engine.py b/price_comparison_engine.py index 6ed0649..72565d7 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -1,3 +1,5 @@ + + from tkinter import * from bs4 import BeautifulSoup import requests @@ -6,18 +8,19 @@ from collections import defaultdict root = Tk() - +root.geometry("320x150") class Price_compare: def __init__(self, master): + self.var = StringVar() self.var_ebay = StringVar() self.var_flipkart = StringVar() self.var_amzn = StringVar() label = Label(master, text='Enter the product') - label.grid(row=0, column=0) + label.grid(row=0, column=0,padx=(30,10),pady=30) entry = Entry(master, textvariable=self.var) entry.grid(row=0, column=1) @@ -178,7 +181,7 @@ def visit_amzn(self): def visit_flip(self): webbrowser.open(self.link_flip) - -c = Price_compare(root) -root.title('Price Comparison Engine') -root.mainloop() +if __name__ == "__main__": + c = Price_compare(root) + root.title('Price Comparison Engine') + root.mainloop() From b73bb881262881eabb3206b52495465f543691ff Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 7 Apr 2021 21:38:24 +0000 Subject: [PATCH 39/45] Bump bottle from 0.12.13 to 0.12.19 Bumps [bottle](https://github.com/bottlepy/bottle) from 0.12.13 to 0.12.19. - [Release notes](https://github.com/bottlepy/bottle/releases) - [Changelog](https://github.com/bottlepy/bottle/blob/master/docs/changelog.rst) - [Commits](https://github.com/bottlepy/bottle/compare/0.12.13...0.12.19) Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 7acc791..5666b5e 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,7 +1,7 @@ altgraph==0.16.1 astroid==2.3.3 auto-py-to-exe==2.6.6 -bottle==0.12.13 +bottle==0.12.19 bottle-websocket==0.2.9 certifi==2018.4.16 cffi==1.11.5 From 1eb431b3b5d83898c256c6cbf63f60477b58072e Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Wed, 2 Jun 2021 02:52:51 +0000 Subject: [PATCH 40/45] Bump urllib3 from 1.24.2 to 1.26.5 Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.24.2 to 1.26.5. - [Release notes](https://github.com/urllib3/urllib3/releases) - [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst) - [Commits](https://github.com/urllib3/urllib3/compare/1.24.2...1.26.5) --- updated-dependencies: - dependency-name: urllib3 dependency-type: direct:production ... 
Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 7acc791..fc725b2 100644 --- a/requirements.txt +++ b/requirements.txt @@ -38,6 +38,6 @@ requests==2.20.0 six==1.14.0 tensorflow==2.4.0 typed-ast==1.4.1 -urllib3==1.24.2 +urllib3==1.26.5 whichcraft==0.4.1 wrapt==1.11.2 From 08def0d851440a633de4c397325e666e8a4d7be0 Mon Sep 17 00:00:00 2001 From: "dependabot[bot]" <49699333+dependabot[bot]@users.noreply.github.com> Date: Mon, 9 Aug 2021 20:59:30 +0000 Subject: [PATCH 41/45] Bump pywin32 from 223 to 301 Bumps [pywin32](https://github.com/mhammond/pywin32) from 223 to 301. - [Release notes](https://github.com/mhammond/pywin32/releases) - [Changelog](https://github.com/mhammond/pywin32/blob/master/CHANGES.txt) - [Commits](https://github.com/mhammond/pywin32/commits) --- updated-dependencies: - dependency-name: pywin32 dependency-type: direct:production ... Signed-off-by: dependabot[bot] --- requirements.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/requirements.txt b/requirements.txt index 7acc791..0e009b5 100644 --- a/requirements.txt +++ b/requirements.txt @@ -32,7 +32,7 @@ pypiwin32==223 pyqtgraph==0.10.0 python-dateutil==2.7.3 pytz==2018.4 -pywin32==223 +pywin32==301 pywin32-ctypes==0.2.0 requests==2.20.0 six==1.14.0 From 639f6c61b3177e6d8eee4e57d8f0c3e9967af05d Mon Sep 17 00:00:00 2001 From: Sushant Date: Mon, 16 Aug 2021 00:51:31 +0530 Subject: [PATCH 42/45] added headers for amazon --- price_comparison_engine.py | 22 ++++++++++++++++++++-- 1 file changed, 20 insertions(+), 2 deletions(-) diff --git a/price_comparison_engine.py b/price_comparison_engine.py index 6ed0649..88d6081 100644 --- a/price_comparison_engine.py +++ b/price_comparison_engine.py @@ -4,6 +4,7 @@ from difflib import get_close_matches import webbrowser from collections import defaultdict +import random root = Tk() @@ -135,14 +136,30 @@ def price_amzn(self, key): # Faking the visit from a browser headers = { - 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'} + 'authority': 'www.amazon.com', + 'pragma': 'no-cache', + 'cache-control': 'no-cache', + 'dnt': '1', + 'upgrade-insecure-requests': '1', + 'user-agent': 'Mozilla/5.0 (X11; CrOS x86_64 8172.45.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.64 Safari/537.36', + 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', + 'sec-fetch-site': 'none', + 'sec-fetch-mode': 'navigate', + 'sec-fetch-dest': 'document', + 'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8', + } map = defaultdict(list) home = 'https://www.amazon.in' - source_code = requests.get(url_amzn, headers=self.headers) + proxies_list = ["128.199.109.241:8080", "113.53.230.195:3128", "125.141.200.53:80", "125.141.200.14:80", + "128.199.200.112:138", "149.56.123.99:3128", "128.199.200.112:80", "125.141.200.39:80", + "134.213.29.202:4444"] + proxies = {'https': random.choice(proxies_list)} + source_code = requests.get(url_amzn, headers=headers) plain_text = source_code.text self.opt_title = StringVar() self.soup = BeautifulSoup(plain_text, "html.parser") + # print(self.soup) for html in self.soup.find_all('div', {'class': 'sg-col-inner'}): title, link = None, None for heading in html.find_all('span', {'class': 'a-size-medium a-color-base a-text-normal'}): @@ -151,6 +168,7 @@ def price_amzn(self, key): price 
= p.text for l in html.find_all('a', {'class': 'a-link-normal a-text-normal'}): link = home + l.get('href') + # print(title,link,price) if title and link: map[title] = [price, link] user_input = self.var.get().title() From 8838eff0423fe5f2dce841ed89363e0821d856c9 Mon Sep 17 00:00:00 2001 From: Sushant Date: Mon, 16 Aug 2021 00:54:49 +0530 Subject: [PATCH 43/45] fixed amazon headers --- .idea/.gitignore | 2 + ...mazon-Flipkart-Price-Comparison-Engine.iml | 10 + .../inspectionProfiles/profiles_settings.xml | 6 + .idea/misc.xml | 7 + .idea/modules.xml | 8 + .idea/vcs.xml | 6 + .../beautifulsoup4-4.9.1.dist-info/AUTHORS | 49 + .../COPYING.txt | 27 + .../beautifulsoup4-4.9.1.dist-info/INSTALLER | 1 + .../beautifulsoup4-4.9.1.dist-info/LICENSE | 30 + .../beautifulsoup4-4.9.1.dist-info/METADATA | 131 + .../beautifulsoup4-4.9.1.dist-info/RECORD | 44 + .../beautifulsoup4-4.9.1.dist-info/WHEEL | 5 + .../top_level.txt | 1 + venv/Lib/site-packages/bs4/__init__.py | 777 ++ .../bs4/__pycache__/__init__.cpython-36.pyc | Bin 0 -> 22732 bytes .../bs4/__pycache__/dammit.cpython-36.pyc | Bin 0 -> 21926 bytes .../bs4/__pycache__/diagnose.cpython-36.pyc | Bin 0 -> 8405 bytes .../bs4/__pycache__/element.cpython-36.pyc | Bin 0 -> 62743 bytes .../bs4/__pycache__/formatter.cpython-36.pyc | Bin 0 -> 5421 bytes .../bs4/__pycache__/testing.cpython-36.pyc | Bin 0 -> 40206 bytes .../Lib/site-packages/bs4/builder/__init__.py | 520 + .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 15287 bytes .../__pycache__/_html5lib.cpython-36.pyc | Bin 0 -> 12385 bytes .../__pycache__/_htmlparser.cpython-36.pyc | Bin 0 -> 12934 bytes .../builder/__pycache__/_lxml.cpython-36.pyc | Bin 0 -> 9384 bytes .../site-packages/bs4/builder/_html5lib.py | 467 + .../site-packages/bs4/builder/_htmlparser.py | 477 + venv/Lib/site-packages/bs4/builder/_lxml.py | 332 + venv/Lib/site-packages/bs4/dammit.py | 939 ++ venv/Lib/site-packages/bs4/diagnose.py | 242 + venv/Lib/site-packages/bs4/element.py | 2162 +++++ venv/Lib/site-packages/bs4/formatter.py | 152 + venv/Lib/site-packages/bs4/testing.py | 1077 +++ venv/Lib/site-packages/bs4/tests/__init__.py | 1 + .../tests/__pycache__/__init__.cpython-36.pyc | Bin 0 -> 209 bytes .../test_builder_registry.cpython-36.pyc | Bin 0 -> 5031 bytes .../__pycache__/test_docs.cpython-36.pyc | Bin 0 -> 433 bytes .../__pycache__/test_html5lib.cpython-36.pyc | Bin 0 -> 7313 bytes .../test_htmlparser.cpython-36.pyc | Bin 0 -> 4016 bytes .../__pycache__/test_lxml.cpython-36.pyc | Bin 0 -> 3954 bytes .../__pycache__/test_soup.cpython-36.pyc | Bin 0 -> 28289 bytes .../__pycache__/test_tree.cpython-36.pyc | Bin 0 -> 94447 bytes .../bs4/tests/test_builder_registry.py | 147 + venv/Lib/site-packages/bs4/tests/test_docs.py | 36 + .../site-packages/bs4/tests/test_html5lib.py | 190 + .../bs4/tests/test_htmlparser.py | 97 + venv/Lib/site-packages/bs4/tests/test_lxml.py | 115 + venv/Lib/site-packages/bs4/tests/test_soup.py | 728 ++ venv/Lib/site-packages/bs4/tests/test_tree.py | 2324 +++++ .../certifi-2020.6.20.dist-info/INSTALLER | 1 + .../certifi-2020.6.20.dist-info/LICENSE | 21 + .../certifi-2020.6.20.dist-info/METADATA | 82 + .../certifi-2020.6.20.dist-info/RECORD | 13 + .../certifi-2020.6.20.dist-info/WHEEL | 6 + .../certifi-2020.6.20.dist-info/top_level.txt | 1 + venv/Lib/site-packages/certifi/__init__.py | 3 + venv/Lib/site-packages/certifi/__main__.py | 12 + .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 248 bytes .../__pycache__/__main__.cpython-36.pyc | Bin 0 -> 413 bytes .../certifi/__pycache__/core.cpython-36.pyc | 
Bin 0 -> 1117 bytes venv/Lib/site-packages/certifi/cacert.pem | 4620 +++++++++ venv/Lib/site-packages/certifi/core.py | 60 + .../chardet-3.0.4.dist-info/DESCRIPTION.rst | 70 + .../chardet-3.0.4.dist-info/INSTALLER | 1 + .../chardet-3.0.4.dist-info/METADATA | 96 + .../chardet-3.0.4.dist-info/RECORD | 91 + .../chardet-3.0.4.dist-info/WHEEL | 6 + .../chardet-3.0.4.dist-info/entry_points.txt | 3 + .../chardet-3.0.4.dist-info/metadata.json | 1 + .../chardet-3.0.4.dist-info/top_level.txt | 1 + venv/Lib/site-packages/chardet/__init__.py | 39 + .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 816 bytes .../__pycache__/big5freq.cpython-36.pyc | Bin 0 -> 54703 bytes .../__pycache__/big5prober.cpython-36.pyc | Bin 0 -> 1092 bytes .../chardistribution.cpython-36.pyc | Bin 0 -> 6288 bytes .../charsetgroupprober.cpython-36.pyc | Bin 0 -> 2199 bytes .../__pycache__/charsetprober.cpython-36.pyc | Bin 0 -> 3425 bytes .../codingstatemachine.cpython-36.pyc | Bin 0 -> 2856 bytes .../chardet/__pycache__/compat.cpython-36.pyc | Bin 0 -> 332 bytes .../__pycache__/cp949prober.cpython-36.pyc | Bin 0 -> 1099 bytes .../chardet/__pycache__/enums.cpython-36.pyc | Bin 0 -> 2590 bytes .../__pycache__/escprober.cpython-36.pyc | Bin 0 -> 2581 bytes .../chardet/__pycache__/escsm.cpython-36.pyc | Bin 0 -> 7338 bytes .../__pycache__/eucjpprober.cpython-36.pyc | Bin 0 -> 2385 bytes .../__pycache__/euckrfreq.cpython-36.pyc | Bin 0 -> 24089 bytes .../__pycache__/euckrprober.cpython-36.pyc | Bin 0 -> 1100 bytes .../__pycache__/euctwfreq.cpython-36.pyc | Bin 0 -> 54712 bytes .../__pycache__/euctwprober.cpython-36.pyc | Bin 0 -> 1100 bytes .../__pycache__/gb2312freq.cpython-36.pyc | Bin 0 -> 38354 bytes .../__pycache__/gb2312prober.cpython-36.pyc | Bin 0 -> 1108 bytes .../__pycache__/hebrewprober.cpython-36.pyc | Bin 0 -> 2942 bytes .../__pycache__/jisfreq.cpython-36.pyc | Bin 0 -> 44498 bytes .../chardet/__pycache__/jpcntx.cpython-36.pyc | Bin 0 -> 38637 bytes .../langbulgarianmodel.cpython-36.pyc | Bin 0 -> 24852 bytes .../langcyrillicmodel.cpython-36.pyc | Bin 0 -> 30403 bytes .../__pycache__/langgreekmodel.cpython-36.pyc | Bin 0 -> 24530 bytes .../langhebrewmodel.cpython-36.pyc | Bin 0 -> 23384 bytes .../langhungarianmodel.cpython-36.pyc | Bin 0 -> 24826 bytes .../__pycache__/langthaimodel.cpython-36.pyc | Bin 0 -> 23363 bytes .../langturkishmodel.cpython-36.pyc | Bin 0 -> 23381 bytes .../__pycache__/latin1prober.cpython-36.pyc | Bin 0 -> 2913 bytes .../mbcharsetprober.cpython-36.pyc | Bin 0 -> 2204 bytes .../mbcsgroupprober.cpython-36.pyc | Bin 0 -> 1095 bytes .../chardet/__pycache__/mbcssm.cpython-36.pyc | Bin 0 -> 17548 bytes .../sbcharsetprober.cpython-36.pyc | Bin 0 -> 2957 bytes .../sbcsgroupprober.cpython-36.pyc | Bin 0 -> 1585 bytes .../__pycache__/sjisprober.cpython-36.pyc | Bin 0 -> 2411 bytes .../universaldetector.cpython-36.pyc | Bin 0 -> 5806 bytes .../__pycache__/utf8prober.cpython-36.pyc | Bin 0 -> 1942 bytes .../__pycache__/version.cpython-36.pyc | Bin 0 -> 411 bytes venv/Lib/site-packages/chardet/big5freq.py | 386 + venv/Lib/site-packages/chardet/big5prober.py | 47 + .../site-packages/chardet/chardistribution.py | 233 + .../chardet/charsetgroupprober.py | 106 + .../site-packages/chardet/charsetprober.py | 145 + .../Lib/site-packages/chardet/cli/__init__.py | 1 + .../cli/__pycache__/__init__.cpython-36.pyc | Bin 0 -> 168 bytes .../cli/__pycache__/chardetect.cpython-36.pyc | Bin 0 -> 3056 bytes .../site-packages/chardet/cli/chardetect.py | 85 + .../chardet/codingstatemachine.py | 88 + 
venv/Lib/site-packages/chardet/compat.py | 34 + venv/Lib/site-packages/chardet/cp949prober.py | 49 + venv/Lib/site-packages/chardet/enums.py | 76 + venv/Lib/site-packages/chardet/escprober.py | 101 + venv/Lib/site-packages/chardet/escsm.py | 246 + venv/Lib/site-packages/chardet/eucjpprober.py | 92 + venv/Lib/site-packages/chardet/euckrfreq.py | 195 + venv/Lib/site-packages/chardet/euckrprober.py | 47 + venv/Lib/site-packages/chardet/euctwfreq.py | 387 + venv/Lib/site-packages/chardet/euctwprober.py | 46 + venv/Lib/site-packages/chardet/gb2312freq.py | 283 + .../Lib/site-packages/chardet/gb2312prober.py | 46 + .../Lib/site-packages/chardet/hebrewprober.py | 292 + venv/Lib/site-packages/chardet/jisfreq.py | 325 + venv/Lib/site-packages/chardet/jpcntx.py | 233 + .../chardet/langbulgarianmodel.py | 228 + .../chardet/langcyrillicmodel.py | 333 + .../site-packages/chardet/langgreekmodel.py | 225 + .../site-packages/chardet/langhebrewmodel.py | 200 + .../chardet/langhungarianmodel.py | 225 + .../site-packages/chardet/langthaimodel.py | 199 + .../site-packages/chardet/langturkishmodel.py | 193 + .../Lib/site-packages/chardet/latin1prober.py | 145 + .../site-packages/chardet/mbcharsetprober.py | 91 + .../site-packages/chardet/mbcsgroupprober.py | 54 + venv/Lib/site-packages/chardet/mbcssm.py | 572 ++ .../site-packages/chardet/sbcharsetprober.py | 132 + .../site-packages/chardet/sbcsgroupprober.py | 73 + venv/Lib/site-packages/chardet/sjisprober.py | 92 + .../chardet/universaldetector.py | 286 + venv/Lib/site-packages/chardet/utf8prober.py | 82 + venv/Lib/site-packages/chardet/version.py | 9 + venv/Lib/site-packages/easy-install.pth | 2 + .../idna-2.10.dist-info/INSTALLER | 1 + .../idna-2.10.dist-info/LICENSE.rst | 34 + .../idna-2.10.dist-info/METADATA | 243 + .../site-packages/idna-2.10.dist-info/RECORD | 22 + .../site-packages/idna-2.10.dist-info/WHEEL | 6 + .../idna-2.10.dist-info/top_level.txt | 1 + venv/Lib/site-packages/idna/__init__.py | 2 + .../idna/__pycache__/__init__.cpython-36.pyc | Bin 0 -> 227 bytes .../idna/__pycache__/codec.cpython-36.pyc | Bin 0 -> 3074 bytes .../idna/__pycache__/compat.cpython-36.pyc | Bin 0 -> 587 bytes .../idna/__pycache__/core.cpython-36.pyc | Bin 0 -> 9284 bytes .../idna/__pycache__/idnadata.cpython-36.pyc | Bin 0 -> 30719 bytes .../idna/__pycache__/intranges.cpython-36.pyc | Bin 0 -> 1788 bytes .../__pycache__/package_data.cpython-36.pyc | Bin 0 -> 182 bytes .../idna/__pycache__/uts46data.cpython-36.pyc | Bin 0 -> 246384 bytes venv/Lib/site-packages/idna/codec.py | 118 + venv/Lib/site-packages/idna/compat.py | 12 + venv/Lib/site-packages/idna/core.py | 400 + venv/Lib/site-packages/idna/idnadata.py | 2050 ++++ venv/Lib/site-packages/idna/intranges.py | 53 + venv/Lib/site-packages/idna/package_data.py | 2 + venv/Lib/site-packages/idna/uts46data.py | 8357 +++++++++++++++++ .../pip-19.0.3-py3.6.egg/EGG-INFO/PKG-INFO | 73 + .../pip-19.0.3-py3.6.egg/EGG-INFO/SOURCES.txt | 391 + .../EGG-INFO/dependency_links.txt | 1 + .../EGG-INFO/entry_points.txt | 5 + .../EGG-INFO/not-zip-safe | 1 + .../EGG-INFO/top_level.txt | 1 + .../pip-19.0.3-py3.6.egg/pip/__init__.py | 1 + .../pip-19.0.3-py3.6.egg/pip/__main__.py | 19 + .../pip/_internal/__init__.py | 78 + .../pip/_internal/build_env.py | 215 + .../pip/_internal/cache.py | 224 + .../pip/_internal/cli/__init__.py | 4 + .../pip/_internal/cli/autocompletion.py | 152 + .../pip/_internal/cli/base_command.py | 341 + .../pip/_internal/cli/cmdoptions.py | 809 ++ .../pip/_internal/cli/main_parser.py | 104 + 
.../pip/_internal/cli/parser.py | 261 + .../pip/_internal/cli/status_codes.py | 8 + .../pip/_internal/commands/__init__.py | 79 + .../pip/_internal/commands/check.py | 41 + .../pip/_internal/commands/completion.py | 94 + .../pip/_internal/commands/configuration.py | 227 + .../pip/_internal/commands/download.py | 176 + .../pip/_internal/commands/freeze.py | 96 + .../pip/_internal/commands/hash.py | 57 + .../pip/_internal/commands/help.py | 37 + .../pip/_internal/commands/install.py | 566 ++ .../pip/_internal/commands/list.py | 301 + .../pip/_internal/commands/search.py | 135 + .../pip/_internal/commands/show.py | 168 + .../pip/_internal/commands/uninstall.py | 78 + .../pip/_internal/commands/wheel.py | 186 + .../pip/_internal/configuration.py | 387 + .../pip/_internal/download.py | 971 ++ .../pip/_internal/exceptions.py | 274 + .../pip/_internal/index.py | 990 ++ .../pip/_internal/locations.py | 211 + .../pip/_internal/models/__init__.py | 2 + .../pip/_internal/models/candidate.py | 31 + .../pip/_internal/models/format_control.py | 73 + .../pip/_internal/models/index.py | 31 + .../pip/_internal/models/link.py | 163 + .../pip/_internal/operations/__init__.py | 0 .../pip/_internal/operations/check.py | 155 + .../pip/_internal/operations/freeze.py | 247 + .../pip/_internal/operations/prepare.py | 413 + .../pip/_internal/pep425tags.py | 381 + .../pip/_internal/pyproject.py | 171 + .../pip/_internal/req/__init__.py | 77 + .../pip/_internal/req/constructors.py | 339 + .../pip/_internal/req/req_file.py | 382 + .../pip/_internal/req/req_install.py | 1021 ++ .../pip/_internal/req/req_set.py | 197 + .../pip/_internal/req/req_tracker.py | 88 + .../pip/_internal/req/req_uninstall.py | 596 ++ .../pip/_internal/resolve.py | 393 + .../pip/_internal/utils/__init__.py | 0 .../pip/_internal/utils/appdirs.py | 270 + .../pip/_internal/utils/compat.py | 264 + .../pip/_internal/utils/deprecation.py | 90 + .../pip/_internal/utils/encoding.py | 39 + .../pip/_internal/utils/filesystem.py | 30 + .../pip/_internal/utils/glibc.py | 93 + .../pip/_internal/utils/hashes.py | 115 + .../pip/_internal/utils/logging.py | 318 + .../pip/_internal/utils/misc.py | 1040 ++ .../pip/_internal/utils/models.py | 40 + .../pip/_internal/utils/outdated.py | 164 + .../pip/_internal/utils/packaging.py | 85 + .../pip/_internal/utils/setuptools_build.py | 8 + .../pip/_internal/utils/temp_dir.py | 155 + .../pip/_internal/utils/typing.py | 29 + .../pip/_internal/utils/ui.py | 441 + .../pip/_internal/vcs/__init__.py | 534 ++ .../pip/_internal/vcs/bazaar.py | 114 + .../pip/_internal/vcs/git.py | 369 + .../pip/_internal/vcs/mercurial.py | 103 + .../pip/_internal/vcs/subversion.py | 200 + .../pip/_internal/wheel.py | 1095 +++ .../pip/_vendor/__init__.py | 111 + .../pip/_vendor/appdirs.py | 604 ++ .../pip/_vendor/cachecontrol/__init__.py | 11 + .../pip/_vendor/cachecontrol/_cmd.py | 57 + .../pip/_vendor/cachecontrol/adapter.py | 133 + .../pip/_vendor/cachecontrol/cache.py | 39 + .../_vendor/cachecontrol/caches/__init__.py | 2 + .../_vendor/cachecontrol/caches/file_cache.py | 146 + .../cachecontrol/caches/redis_cache.py | 33 + .../pip/_vendor/cachecontrol/compat.py | 29 + .../pip/_vendor/cachecontrol/controller.py | 367 + .../pip/_vendor/cachecontrol/filewrapper.py | 80 + .../pip/_vendor/cachecontrol/heuristics.py | 135 + .../pip/_vendor/cachecontrol/serialize.py | 186 + .../pip/_vendor/cachecontrol/wrapper.py | 29 + .../pip/_vendor/certifi/__init__.py | 3 + .../pip/_vendor/certifi/__main__.py | 2 + .../pip/_vendor/certifi/cacert.pem | 4512 
+++++++++ .../pip/_vendor/certifi/core.py | 20 + .../pip/_vendor/chardet/__init__.py | 39 + .../pip/_vendor/chardet/big5freq.py | 386 + .../pip/_vendor/chardet/big5prober.py | 47 + .../pip/_vendor/chardet/chardistribution.py | 233 + .../pip/_vendor/chardet/charsetgroupprober.py | 106 + .../pip/_vendor/chardet/charsetprober.py | 145 + .../pip/_vendor/chardet/cli/__init__.py | 1 + .../pip/_vendor/chardet/cli/chardetect.py | 85 + .../pip/_vendor/chardet/codingstatemachine.py | 88 + .../pip/_vendor/chardet/compat.py | 34 + .../pip/_vendor/chardet/cp949prober.py | 49 + .../pip/_vendor/chardet/enums.py | 76 + .../pip/_vendor/chardet/escprober.py | 101 + .../pip/_vendor/chardet/escsm.py | 246 + .../pip/_vendor/chardet/eucjpprober.py | 92 + .../pip/_vendor/chardet/euckrfreq.py | 195 + .../pip/_vendor/chardet/euckrprober.py | 47 + .../pip/_vendor/chardet/euctwfreq.py | 387 + .../pip/_vendor/chardet/euctwprober.py | 46 + .../pip/_vendor/chardet/gb2312freq.py | 283 + .../pip/_vendor/chardet/gb2312prober.py | 46 + .../pip/_vendor/chardet/hebrewprober.py | 292 + .../pip/_vendor/chardet/jisfreq.py | 325 + .../pip/_vendor/chardet/jpcntx.py | 233 + .../pip/_vendor/chardet/langbulgarianmodel.py | 228 + .../pip/_vendor/chardet/langcyrillicmodel.py | 333 + .../pip/_vendor/chardet/langgreekmodel.py | 225 + .../pip/_vendor/chardet/langhebrewmodel.py | 200 + .../pip/_vendor/chardet/langhungarianmodel.py | 225 + .../pip/_vendor/chardet/langthaimodel.py | 199 + .../pip/_vendor/chardet/langturkishmodel.py | 193 + .../pip/_vendor/chardet/latin1prober.py | 145 + .../pip/_vendor/chardet/mbcharsetprober.py | 91 + .../pip/_vendor/chardet/mbcsgroupprober.py | 54 + .../pip/_vendor/chardet/mbcssm.py | 572 ++ .../pip/_vendor/chardet/sbcharsetprober.py | 132 + .../pip/_vendor/chardet/sbcsgroupprober.py | 73 + .../pip/_vendor/chardet/sjisprober.py | 92 + .../pip/_vendor/chardet/universaldetector.py | 286 + .../pip/_vendor/chardet/utf8prober.py | 82 + .../pip/_vendor/chardet/version.py | 9 + .../pip/_vendor/colorama/__init__.py | 6 + .../pip/_vendor/colorama/ansi.py | 102 + .../pip/_vendor/colorama/ansitowin32.py | 257 + .../pip/_vendor/colorama/initialise.py | 80 + .../pip/_vendor/colorama/win32.py | 152 + .../pip/_vendor/colorama/winterm.py | 169 + .../pip/_vendor/distlib/__init__.py | 23 + .../pip/_vendor/distlib/_backport/__init__.py | 6 + .../pip/_vendor/distlib/_backport/misc.py | 41 + .../pip/_vendor/distlib/_backport/shutil.py | 761 ++ .../_vendor/distlib/_backport/sysconfig.cfg | 84 + .../_vendor/distlib/_backport/sysconfig.py | 788 ++ .../pip/_vendor/distlib/_backport/tarfile.py | 2607 +++++ .../pip/_vendor/distlib/compat.py | 1120 +++ .../pip/_vendor/distlib/database.py | 1339 +++ .../pip/_vendor/distlib/index.py | 516 + .../pip/_vendor/distlib/locators.py | 1295 +++ .../pip/_vendor/distlib/manifest.py | 393 + .../pip/_vendor/distlib/markers.py | 131 + .../pip/_vendor/distlib/metadata.py | 1094 +++ .../pip/_vendor/distlib/resources.py | 355 + .../pip/_vendor/distlib/scripts.py | 417 + .../pip/_vendor/distlib/t32.exe | Bin 0 -> 92672 bytes .../pip/_vendor/distlib/t64.exe | Bin 0 -> 102400 bytes .../pip/_vendor/distlib/util.py | 1756 ++++ .../pip/_vendor/distlib/version.py | 736 ++ .../pip/_vendor/distlib/w32.exe | Bin 0 -> 89088 bytes .../pip/_vendor/distlib/w64.exe | Bin 0 -> 99328 bytes .../pip/_vendor/distlib/wheel.py | 988 ++ .../pip/_vendor/distro.py | 1197 +++ .../pip/_vendor/html5lib/__init__.py | 35 + .../pip/_vendor/html5lib/_ihatexml.py | 288 + .../pip/_vendor/html5lib/_inputstream.py | 923 ++ 
.../pip/_vendor/html5lib/_tokenizer.py | 1721 ++++ .../pip/_vendor/html5lib/_trie/__init__.py | 14 + .../pip/_vendor/html5lib/_trie/_base.py | 37 + .../pip/_vendor/html5lib/_trie/datrie.py | 44 + .../pip/_vendor/html5lib/_trie/py.py | 67 + .../pip/_vendor/html5lib/_utils.py | 124 + .../pip/_vendor/html5lib/constants.py | 2947 ++++++ .../pip/_vendor/html5lib/filters/__init__.py | 0 .../filters/alphabeticalattributes.py | 29 + .../pip/_vendor/html5lib/filters/base.py | 12 + .../html5lib/filters/inject_meta_charset.py | 73 + .../pip/_vendor/html5lib/filters/lint.py | 93 + .../_vendor/html5lib/filters/optionaltags.py | 207 + .../pip/_vendor/html5lib/filters/sanitizer.py | 896 ++ .../_vendor/html5lib/filters/whitespace.py | 38 + .../pip/_vendor/html5lib/html5parser.py | 2791 ++++++ .../pip/_vendor/html5lib/serializer.py | 409 + .../_vendor/html5lib/treeadapters/__init__.py | 30 + .../_vendor/html5lib/treeadapters/genshi.py | 54 + .../pip/_vendor/html5lib/treeadapters/sax.py | 50 + .../_vendor/html5lib/treebuilders/__init__.py | 88 + .../pip/_vendor/html5lib/treebuilders/base.py | 417 + .../pip/_vendor/html5lib/treebuilders/dom.py | 236 + .../_vendor/html5lib/treebuilders/etree.py | 340 + .../html5lib/treebuilders/etree_lxml.py | 366 + .../_vendor/html5lib/treewalkers/__init__.py | 154 + .../pip/_vendor/html5lib/treewalkers/base.py | 252 + .../pip/_vendor/html5lib/treewalkers/dom.py | 43 + .../pip/_vendor/html5lib/treewalkers/etree.py | 130 + .../html5lib/treewalkers/etree_lxml.py | 213 + .../_vendor/html5lib/treewalkers/genshi.py | 69 + .../pip/_vendor/idna/__init__.py | 2 + .../pip/_vendor/idna/codec.py | 118 + .../pip/_vendor/idna/compat.py | 12 + .../pip/_vendor/idna/core.py | 396 + .../pip/_vendor/idna/idnadata.py | 1979 ++++ .../pip/_vendor/idna/intranges.py | 53 + .../pip/_vendor/idna/package_data.py | 2 + .../pip/_vendor/idna/uts46data.py | 8205 ++++++++++++++++ .../pip/_vendor/ipaddress.py | 2419 +++++ .../pip/_vendor/lockfile/__init__.py | 347 + .../pip/_vendor/lockfile/linklockfile.py | 73 + .../pip/_vendor/lockfile/mkdirlockfile.py | 84 + .../pip/_vendor/lockfile/pidlockfile.py | 190 + .../pip/_vendor/lockfile/sqlitelockfile.py | 156 + .../pip/_vendor/lockfile/symlinklockfile.py | 70 + .../pip/_vendor/msgpack/__init__.py | 66 + .../pip/_vendor/msgpack/_version.py | 1 + .../pip/_vendor/msgpack/exceptions.py | 41 + .../pip/_vendor/msgpack/fallback.py | 977 ++ .../pip/_vendor/packaging/__about__.py | 27 + .../pip/_vendor/packaging/__init__.py | 26 + .../pip/_vendor/packaging/_compat.py | 31 + .../pip/_vendor/packaging/_structures.py | 68 + .../pip/_vendor/packaging/markers.py | 296 + .../pip/_vendor/packaging/requirements.py | 138 + .../pip/_vendor/packaging/specifiers.py | 749 ++ .../pip/_vendor/packaging/utils.py | 57 + .../pip/_vendor/packaging/version.py | 420 + .../pip/_vendor/pep517/__init__.py | 4 + .../pip/_vendor/pep517/_in_process.py | 207 + .../pip/_vendor/pep517/build.py | 108 + .../pip/_vendor/pep517/check.py | 202 + .../pip/_vendor/pep517/colorlog.py | 115 + .../pip/_vendor/pep517/compat.py | 23 + .../pip/_vendor/pep517/envbuild.py | 158 + .../pip/_vendor/pep517/wrappers.py | 163 + .../pip/_vendor/pkg_resources/__init__.py | 3171 +++++++ .../pip/_vendor/pkg_resources/py31compat.py | 23 + .../pip/_vendor/progress/__init__.py | 127 + .../pip/_vendor/progress/bar.py | 94 + .../pip/_vendor/progress/counter.py | 48 + .../pip/_vendor/progress/helpers.py | 91 + .../pip/_vendor/progress/spinner.py | 44 + .../pip/_vendor/pyparsing.py | 6452 +++++++++++++ 
.../pip/_vendor/pytoml/__init__.py | 4 + .../pip/_vendor/pytoml/core.py | 13 + .../pip/_vendor/pytoml/parser.py | 341 + .../pip/_vendor/pytoml/test.py | 30 + .../pip/_vendor/pytoml/utils.py | 67 + .../pip/_vendor/pytoml/writer.py | 106 + .../pip/_vendor/requests/__init__.py | 133 + .../pip/_vendor/requests/__version__.py | 14 + .../pip/_vendor/requests/_internal_utils.py | 42 + .../pip/_vendor/requests/adapters.py | 533 ++ .../pip/_vendor/requests/api.py | 158 + .../pip/_vendor/requests/auth.py | 305 + .../pip/_vendor/requests/certs.py | 18 + .../pip/_vendor/requests/compat.py | 74 + .../pip/_vendor/requests/cookies.py | 549 ++ .../pip/_vendor/requests/exceptions.py | 126 + .../pip/_vendor/requests/help.py | 119 + .../pip/_vendor/requests/hooks.py | 34 + .../pip/_vendor/requests/models.py | 953 ++ .../pip/_vendor/requests/packages.py | 16 + .../pip/_vendor/requests/sessions.py | 770 ++ .../pip/_vendor/requests/status_codes.py | 120 + .../pip/_vendor/requests/structures.py | 103 + .../pip/_vendor/requests/utils.py | 977 ++ .../pip/_vendor/retrying.py | 267 + .../pip-19.0.3-py3.6.egg/pip/_vendor/six.py | 952 ++ .../pip/_vendor/urllib3/__init__.py | 92 + .../pip/_vendor/urllib3/_collections.py | 329 + .../pip/_vendor/urllib3/connection.py | 391 + .../pip/_vendor/urllib3/connectionpool.py | 896 ++ .../pip/_vendor/urllib3/contrib/__init__.py | 0 .../urllib3/contrib/_appengine_environ.py | 30 + .../contrib/_securetransport/__init__.py | 0 .../contrib/_securetransport/bindings.py | 593 ++ .../contrib/_securetransport/low_level.py | 346 + .../pip/_vendor/urllib3/contrib/appengine.py | 289 + .../pip/_vendor/urllib3/contrib/ntlmpool.py | 111 + .../pip/_vendor/urllib3/contrib/pyopenssl.py | 466 + .../urllib3/contrib/securetransport.py | 804 ++ .../pip/_vendor/urllib3/contrib/socks.py | 192 + .../pip/_vendor/urllib3/exceptions.py | 246 + .../pip/_vendor/urllib3/fields.py | 178 + .../pip/_vendor/urllib3/filepost.py | 98 + .../pip/_vendor/urllib3/packages/__init__.py | 5 + .../urllib3/packages/backports/__init__.py | 0 .../urllib3/packages/backports/makefile.py | 53 + .../pip/_vendor/urllib3/packages/six.py | 868 ++ .../packages/ssl_match_hostname/__init__.py | 19 + .../ssl_match_hostname/_implementation.py | 156 + .../pip/_vendor/urllib3/poolmanager.py | 450 + .../pip/_vendor/urllib3/request.py | 150 + .../pip/_vendor/urllib3/response.py | 705 ++ .../pip/_vendor/urllib3/util/__init__.py | 54 + .../pip/_vendor/urllib3/util/connection.py | 134 + .../pip/_vendor/urllib3/util/queue.py | 21 + .../pip/_vendor/urllib3/util/request.py | 118 + .../pip/_vendor/urllib3/util/response.py | 87 + .../pip/_vendor/urllib3/util/retry.py | 411 + .../pip/_vendor/urllib3/util/ssl_.py | 381 + .../pip/_vendor/urllib3/util/timeout.py | 242 + .../pip/_vendor/urllib3/util/url.py | 230 + .../pip/_vendor/urllib3/util/wait.py | 150 + .../pip/_vendor/webencodings/__init__.py | 342 + .../pip/_vendor/webencodings/labels.py | 231 + .../pip/_vendor/webencodings/mklabels.py | 59 + .../pip/_vendor/webencodings/tests.py | 153 + .../_vendor/webencodings/x_user_defined.py | 325 + .../requests-2.24.0.dist-info/DESCRIPTION.rst | 136 + .../requests-2.24.0.dist-info/INSTALLER | 1 + .../requests-2.24.0.dist-info/LICENSE.txt | 13 + .../requests-2.24.0.dist-info/METADATA | 177 + .../requests-2.24.0.dist-info/RECORD | 44 + .../requests-2.24.0.dist-info/WHEEL | 6 + .../requests-2.24.0.dist-info/metadata.json | 1 + .../requests-2.24.0.dist-info/top_level.txt | 1 + venv/Lib/site-packages/requests/__init__.py | 139 + 
.../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 3428 bytes .../__pycache__/__version__.cpython-36.pyc | Bin 0 -> 529 bytes .../_internal_utils.cpython-36.pyc | Bin 0 -> 1282 bytes .../__pycache__/adapters.cpython-36.pyc | Bin 0 -> 16801 bytes .../requests/__pycache__/api.cpython-36.pyc | Bin 0 -> 6711 bytes .../requests/__pycache__/auth.cpython-36.pyc | Bin 0 -> 8326 bytes .../requests/__pycache__/certs.cpython-36.pyc | Bin 0 -> 595 bytes .../__pycache__/compat.cpython-36.pyc | Bin 0 -> 1629 bytes .../__pycache__/cookies.cpython-36.pyc | Bin 0 -> 18761 bytes .../__pycache__/exceptions.cpython-36.pyc | Bin 0 -> 5466 bytes .../requests/__pycache__/help.cpython-36.pyc | Bin 0 -> 2602 bytes .../requests/__pycache__/hooks.cpython-36.pyc | Bin 0 -> 954 bytes .../__pycache__/models.cpython-36.pyc | Bin 0 -> 24069 bytes .../__pycache__/packages.cpython-36.pyc | Bin 0 -> 406 bytes .../__pycache__/sessions.cpython-36.pyc | Bin 0 -> 19405 bytes .../__pycache__/status_codes.cpython-36.pyc | Bin 0 -> 4811 bytes .../__pycache__/structures.cpython-36.pyc | Bin 0 -> 4383 bytes .../requests/__pycache__/utils.cpython-36.pyc | Bin 0 -> 22268 bytes .../Lib/site-packages/requests/__version__.py | 14 + .../site-packages/requests/_internal_utils.py | 42 + venv/Lib/site-packages/requests/adapters.py | 533 ++ venv/Lib/site-packages/requests/api.py | 161 + venv/Lib/site-packages/requests/auth.py | 305 + venv/Lib/site-packages/requests/certs.py | 18 + venv/Lib/site-packages/requests/compat.py | 72 + venv/Lib/site-packages/requests/cookies.py | 549 ++ venv/Lib/site-packages/requests/exceptions.py | 123 + venv/Lib/site-packages/requests/help.py | 119 + venv/Lib/site-packages/requests/hooks.py | 34 + venv/Lib/site-packages/requests/models.py | 954 ++ venv/Lib/site-packages/requests/packages.py | 14 + venv/Lib/site-packages/requests/sessions.py | 769 ++ .../site-packages/requests/status_codes.py | 123 + venv/Lib/site-packages/requests/structures.py | 105 + venv/Lib/site-packages/requests/utils.py | 982 ++ .../site-packages/setuptools-40.8.0-py3.6.egg | Bin 0 -> 571910 bytes venv/Lib/site-packages/setuptools.pth | 1 + .../soupsieve-2.0.1.dist-info/INSTALLER | 1 + .../soupsieve-2.0.1.dist-info/LICENSE.md | 21 + .../soupsieve-2.0.1.dist-info/METADATA | 124 + .../soupsieve-2.0.1.dist-info/RECORD | 18 + .../soupsieve-2.0.1.dist-info/WHEEL | 5 + .../soupsieve-2.0.1.dist-info/top_level.txt | 1 + venv/Lib/site-packages/soupsieve/__init__.py | 111 + venv/Lib/site-packages/soupsieve/__meta__.py | 189 + .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 3672 bytes .../__pycache__/__meta__.cpython-36.pyc | Bin 0 -> 5705 bytes .../__pycache__/css_match.cpython-36.pyc | Bin 0 -> 33349 bytes .../__pycache__/css_parser.cpython-36.pyc | Bin 0 -> 27032 bytes .../__pycache__/css_types.cpython-36.pyc | Bin 0 -> 11251 bytes .../soupsieve/__pycache__/util.cpython-36.pyc | Bin 0 -> 3093 bytes venv/Lib/site-packages/soupsieve/css_match.py | 1497 +++ .../Lib/site-packages/soupsieve/css_parser.py | 1194 +++ venv/Lib/site-packages/soupsieve/css_types.py | 345 + venv/Lib/site-packages/soupsieve/util.py | 121 + .../urllib3-1.25.10.dist-info/INSTALLER | 1 + .../urllib3-1.25.10.dist-info/LICENSE.txt | 21 + .../urllib3-1.25.10.dist-info/METADATA | 1281 +++ .../urllib3-1.25.10.dist-info/RECORD | 80 + .../urllib3-1.25.10.dist-info/WHEEL | 6 + .../urllib3-1.25.10.dist-info/top_level.txt | 1 + venv/Lib/site-packages/urllib3/__init__.py | 87 + .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 2195 bytes .../__pycache__/_collections.cpython-36.pyc | Bin 
0 -> 10665 bytes .../__pycache__/_version.cpython-36.pyc | Bin 0 -> 187 bytes .../__pycache__/connection.cpython-36.pyc | Bin 0 -> 10230 bytes .../__pycache__/connectionpool.cpython-36.pyc | Bin 0 -> 23975 bytes .../__pycache__/exceptions.cpython-36.pyc | Bin 0 -> 11099 bytes .../urllib3/__pycache__/fields.cpython-36.pyc | Bin 0 -> 8077 bytes .../__pycache__/filepost.cpython-36.pyc | Bin 0 -> 2733 bytes .../__pycache__/poolmanager.cpython-36.pyc | Bin 0 -> 13752 bytes .../__pycache__/request.cpython-36.pyc | Bin 0 -> 5565 bytes .../__pycache__/response.cpython-36.pyc | Bin 0 -> 20515 bytes .../Lib/site-packages/urllib3/_collections.py | 336 + venv/Lib/site-packages/urllib3/_version.py | 2 + venv/Lib/site-packages/urllib3/connection.py | 424 + .../site-packages/urllib3/connectionpool.py | 1035 ++ .../site-packages/urllib3/contrib/__init__.py | 0 .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 172 bytes .../_appengine_environ.cpython-36.pyc | Bin 0 -> 1381 bytes .../__pycache__/appengine.cpython-36.pyc | Bin 0 -> 8129 bytes .../__pycache__/ntlmpool.cpython-36.pyc | Bin 0 -> 3218 bytes .../__pycache__/pyopenssl.cpython-36.pyc | Bin 0 -> 14951 bytes .../securetransport.cpython-36.pyc | Bin 0 -> 19698 bytes .../contrib/__pycache__/socks.cpython-36.pyc | Bin 0 -> 5488 bytes .../urllib3/contrib/_appengine_environ.py | 36 + .../contrib/_securetransport/__init__.py | 0 .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 189 bytes .../__pycache__/bindings.cpython-36.pyc | Bin 0 -> 10689 bytes .../__pycache__/low_level.cpython-36.pyc | Bin 0 -> 7433 bytes .../contrib/_securetransport/bindings.py | 510 + .../contrib/_securetransport/low_level.py | 328 + .../urllib3/contrib/appengine.py | 314 + .../site-packages/urllib3/contrib/ntlmpool.py | 121 + .../urllib3/contrib/pyopenssl.py | 501 + .../urllib3/contrib/securetransport.py | 864 ++ .../site-packages/urllib3/contrib/socks.py | 210 + venv/Lib/site-packages/urllib3/exceptions.py | 272 + venv/Lib/site-packages/urllib3/fields.py | 273 + venv/Lib/site-packages/urllib3/filepost.py | 98 + .../urllib3/packages/__init__.py | 5 + .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 298 bytes .../packages/__pycache__/six.cpython-36.pyc | Bin 0 -> 26525 bytes .../urllib3/packages/backports/__init__.py | 0 .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 183 bytes .../__pycache__/makefile.cpython-36.pyc | Bin 0 -> 1273 bytes .../urllib3/packages/backports/makefile.py | 52 + .../Lib/site-packages/urllib3/packages/six.py | 1021 ++ .../packages/ssl_match_hostname/__init__.py | 19 + .../__pycache__/__init__.cpython-36.pyc | Bin 0 -> 559 bytes .../_implementation.cpython-36.pyc | Bin 0 -> 3266 bytes .../ssl_match_hostname/_implementation.py | 160 + venv/Lib/site-packages/urllib3/poolmanager.py | 492 + venv/Lib/site-packages/urllib3/request.py | 171 + venv/Lib/site-packages/urllib3/response.py | 820 ++ .../site-packages/urllib3/util/__init__.py | 46 + .../util/__pycache__/__init__.cpython-36.pyc | Bin 0 -> 1137 bytes .../__pycache__/connection.cpython-36.pyc | Bin 0 -> 3141 bytes .../util/__pycache__/queue.cpython-36.pyc | Bin 0 -> 1013 bytes .../util/__pycache__/request.cpython-36.pyc | Bin 0 -> 3307 bytes .../util/__pycache__/response.cpython-36.pyc | Bin 0 -> 1940 bytes .../util/__pycache__/retry.cpython-36.pyc | Bin 0 -> 12923 bytes .../util/__pycache__/ssl_.cpython-36.pyc | Bin 0 -> 10136 bytes .../util/__pycache__/timeout.cpython-36.pyc | Bin 0 -> 8817 bytes .../util/__pycache__/url.cpython-36.pyc | Bin 0 -> 10582 bytes .../util/__pycache__/wait.cpython-36.pyc 
| Bin 0 -> 3123 bytes .../site-packages/urllib3/util/connection.py | 138 + venv/Lib/site-packages/urllib3/util/queue.py | 21 + .../Lib/site-packages/urllib3/util/request.py | 135 + .../site-packages/urllib3/util/response.py | 86 + venv/Lib/site-packages/urllib3/util/retry.py | 453 + venv/Lib/site-packages/urllib3/util/ssl_.py | 421 + .../Lib/site-packages/urllib3/util/timeout.py | 261 + venv/Lib/site-packages/urllib3/util/url.py | 430 + venv/Lib/site-packages/urllib3/util/wait.py | 153 + venv/Lib/tcl8.6/init.tcl | 818 ++ venv/Scripts/Activate.ps1 | 51 + venv/Scripts/_asyncio.pyd | Bin 0 -> 45720 bytes venv/Scripts/_bz2.pyd | Bin 0 -> 78488 bytes venv/Scripts/_ctypes.pyd | Bin 0 -> 102552 bytes venv/Scripts/_ctypes_test.pyd | Bin 0 -> 30360 bytes venv/Scripts/_decimal.pyd | Bin 0 -> 216728 bytes venv/Scripts/_distutils_findvs.pyd | Bin 0 -> 22168 bytes venv/Scripts/_elementtree.pyd | Bin 0 -> 163992 bytes venv/Scripts/_hashlib.pyd | Bin 0 -> 1121432 bytes venv/Scripts/_lzma.pyd | Bin 0 -> 183960 bytes venv/Scripts/_msi.pyd | Bin 0 -> 33432 bytes venv/Scripts/_multiprocessing.pyd | Bin 0 -> 25752 bytes venv/Scripts/_overlapped.pyd | Bin 0 -> 34456 bytes venv/Scripts/_socket.pyd | Bin 0 -> 63640 bytes venv/Scripts/_sqlite3.pyd | Bin 0 -> 64664 bytes venv/Scripts/_ssl.pyd | Bin 0 -> 1459864 bytes venv/Scripts/_testbuffer.pyd | Bin 0 -> 41624 bytes venv/Scripts/_testcapi.pyd | Bin 0 -> 76440 bytes venv/Scripts/_testconsole.pyd | Bin 0 -> 21144 bytes venv/Scripts/_testimportmultiple.pyd | Bin 0 -> 19608 bytes venv/Scripts/_testmultiphase.pyd | Bin 0 -> 26264 bytes venv/Scripts/_tkinter.pyd | Bin 0 -> 53400 bytes venv/Scripts/activate | 76 + venv/Scripts/activate.bat | 45 + venv/Scripts/chardetect.exe | Bin 0 -> 93098 bytes venv/Scripts/deactivate.bat | 21 + venv/Scripts/easy_install-3.6-script.py | 12 + venv/Scripts/easy_install-3.6.exe | Bin 0 -> 65536 bytes venv/Scripts/easy_install-3.6.exe.manifest | 15 + venv/Scripts/easy_install-script.py | 12 + venv/Scripts/easy_install.exe | Bin 0 -> 65536 bytes venv/Scripts/easy_install.exe.manifest | 15 + venv/Scripts/pip-script.py | 12 + venv/Scripts/pip.exe | Bin 0 -> 65536 bytes venv/Scripts/pip.exe.manifest | 15 + venv/Scripts/pip3-script.py | 12 + venv/Scripts/pip3.6-script.py | 12 + venv/Scripts/pip3.6.exe | Bin 0 -> 65536 bytes venv/Scripts/pip3.6.exe.manifest | 15 + venv/Scripts/pip3.exe | Bin 0 -> 65536 bytes venv/Scripts/pip3.exe.manifest | 15 + venv/Scripts/pyexpat.pyd | Bin 0 -> 164504 bytes venv/Scripts/python.exe | Bin 0 -> 97944 bytes venv/Scripts/python3.dll | Bin 0 -> 58520 bytes venv/Scripts/python36.dll | Bin 0 -> 3303064 bytes venv/Scripts/pythonw.exe | Bin 0 -> 96408 bytes venv/Scripts/select.pyd | Bin 0 -> 23704 bytes venv/Scripts/sqlite3.dll | Bin 0 -> 880792 bytes venv/Scripts/tcl86t.dll | Bin 0 -> 1307136 bytes venv/Scripts/tk86t.dll | Bin 0 -> 1550336 bytes venv/Scripts/unicodedata.pyd | Bin 0 -> 896152 bytes venv/Scripts/vcruntime140.dll | Bin 0 -> 83784 bytes venv/Scripts/winsound.pyd | Bin 0 -> 24728 bytes venv/pyvenv.cfg | 3 + 689 files changed, 162322 insertions(+) create mode 100644 .idea/.gitignore create mode 100644 .idea/Amazon-Flipkart-Price-Comparison-Engine.iml create mode 100644 .idea/inspectionProfiles/profiles_settings.xml create mode 100644 .idea/misc.xml create mode 100644 .idea/modules.xml create mode 100644 .idea/vcs.xml create mode 100644 venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/AUTHORS create mode 100644 venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/COPYING.txt create mode 100644 
venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/INSTALLER create mode 100644 venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/LICENSE create mode 100644 venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/METADATA create mode 100644 venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/RECORD create mode 100644 venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/WHEEL create mode 100644 venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/top_level.txt create mode 100644 venv/Lib/site-packages/bs4/__init__.py create mode 100644 venv/Lib/site-packages/bs4/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/__pycache__/dammit.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/__pycache__/diagnose.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/__pycache__/element.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/__pycache__/formatter.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/__pycache__/testing.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/builder/__init__.py create mode 100644 venv/Lib/site-packages/bs4/builder/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/builder/__pycache__/_html5lib.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/builder/__pycache__/_htmlparser.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/builder/__pycache__/_lxml.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/builder/_html5lib.py create mode 100644 venv/Lib/site-packages/bs4/builder/_htmlparser.py create mode 100644 venv/Lib/site-packages/bs4/builder/_lxml.py create mode 100644 venv/Lib/site-packages/bs4/dammit.py create mode 100644 venv/Lib/site-packages/bs4/diagnose.py create mode 100644 venv/Lib/site-packages/bs4/element.py create mode 100644 venv/Lib/site-packages/bs4/formatter.py create mode 100644 venv/Lib/site-packages/bs4/testing.py create mode 100644 venv/Lib/site-packages/bs4/tests/__init__.py create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/test_builder_registry.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/test_docs.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/test_html5lib.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/test_htmlparser.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/test_lxml.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/test_soup.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/__pycache__/test_tree.cpython-36.pyc create mode 100644 venv/Lib/site-packages/bs4/tests/test_builder_registry.py create mode 100644 venv/Lib/site-packages/bs4/tests/test_docs.py create mode 100644 venv/Lib/site-packages/bs4/tests/test_html5lib.py create mode 100644 venv/Lib/site-packages/bs4/tests/test_htmlparser.py create mode 100644 venv/Lib/site-packages/bs4/tests/test_lxml.py create mode 100644 venv/Lib/site-packages/bs4/tests/test_soup.py create mode 100644 venv/Lib/site-packages/bs4/tests/test_tree.py create mode 100644 venv/Lib/site-packages/certifi-2020.6.20.dist-info/INSTALLER create mode 100644 venv/Lib/site-packages/certifi-2020.6.20.dist-info/LICENSE create mode 100644 venv/Lib/site-packages/certifi-2020.6.20.dist-info/METADATA create mode 100644 venv/Lib/site-packages/certifi-2020.6.20.dist-info/RECORD create mode 100644 
venv/Lib/site-packages/certifi-2020.6.20.dist-info/WHEEL create mode 100644 venv/Lib/site-packages/certifi-2020.6.20.dist-info/top_level.txt create mode 100644 venv/Lib/site-packages/certifi/__init__.py create mode 100644 venv/Lib/site-packages/certifi/__main__.py create mode 100644 venv/Lib/site-packages/certifi/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/certifi/__pycache__/__main__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/certifi/__pycache__/core.cpython-36.pyc create mode 100644 venv/Lib/site-packages/certifi/cacert.pem create mode 100644 venv/Lib/site-packages/certifi/core.py create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/DESCRIPTION.rst create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/INSTALLER create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/METADATA create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/RECORD create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/WHEEL create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/entry_points.txt create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/metadata.json create mode 100644 venv/Lib/site-packages/chardet-3.0.4.dist-info/top_level.txt create mode 100644 venv/Lib/site-packages/chardet/__init__.py create mode 100644 venv/Lib/site-packages/chardet/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/big5freq.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/big5prober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/chardistribution.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/charsetgroupprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/charsetprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/codingstatemachine.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/compat.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/cp949prober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/enums.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/escprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/escsm.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/eucjpprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/euckrfreq.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/euckrprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/euctwfreq.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/euctwprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/gb2312freq.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/gb2312prober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/hebrewprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/jisfreq.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/jpcntx.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/langbulgarianmodel.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/langcyrillicmodel.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/langgreekmodel.cpython-36.pyc create 
mode 100644 venv/Lib/site-packages/chardet/__pycache__/langhebrewmodel.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/langhungarianmodel.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/langthaimodel.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/langturkishmodel.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/latin1prober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/mbcharsetprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/mbcsgroupprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/mbcssm.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/sbcharsetprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/sbcsgroupprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/sjisprober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/universaldetector.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/utf8prober.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/__pycache__/version.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/big5freq.py create mode 100644 venv/Lib/site-packages/chardet/big5prober.py create mode 100644 venv/Lib/site-packages/chardet/chardistribution.py create mode 100644 venv/Lib/site-packages/chardet/charsetgroupprober.py create mode 100644 venv/Lib/site-packages/chardet/charsetprober.py create mode 100644 venv/Lib/site-packages/chardet/cli/__init__.py create mode 100644 venv/Lib/site-packages/chardet/cli/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/cli/__pycache__/chardetect.cpython-36.pyc create mode 100644 venv/Lib/site-packages/chardet/cli/chardetect.py create mode 100644 venv/Lib/site-packages/chardet/codingstatemachine.py create mode 100644 venv/Lib/site-packages/chardet/compat.py create mode 100644 venv/Lib/site-packages/chardet/cp949prober.py create mode 100644 venv/Lib/site-packages/chardet/enums.py create mode 100644 venv/Lib/site-packages/chardet/escprober.py create mode 100644 venv/Lib/site-packages/chardet/escsm.py create mode 100644 venv/Lib/site-packages/chardet/eucjpprober.py create mode 100644 venv/Lib/site-packages/chardet/euckrfreq.py create mode 100644 venv/Lib/site-packages/chardet/euckrprober.py create mode 100644 venv/Lib/site-packages/chardet/euctwfreq.py create mode 100644 venv/Lib/site-packages/chardet/euctwprober.py create mode 100644 venv/Lib/site-packages/chardet/gb2312freq.py create mode 100644 venv/Lib/site-packages/chardet/gb2312prober.py create mode 100644 venv/Lib/site-packages/chardet/hebrewprober.py create mode 100644 venv/Lib/site-packages/chardet/jisfreq.py create mode 100644 venv/Lib/site-packages/chardet/jpcntx.py create mode 100644 venv/Lib/site-packages/chardet/langbulgarianmodel.py create mode 100644 venv/Lib/site-packages/chardet/langcyrillicmodel.py create mode 100644 venv/Lib/site-packages/chardet/langgreekmodel.py create mode 100644 venv/Lib/site-packages/chardet/langhebrewmodel.py create mode 100644 venv/Lib/site-packages/chardet/langhungarianmodel.py create mode 100644 venv/Lib/site-packages/chardet/langthaimodel.py create mode 100644 venv/Lib/site-packages/chardet/langturkishmodel.py create mode 100644 venv/Lib/site-packages/chardet/latin1prober.py create mode 100644 
venv/Lib/site-packages/chardet/mbcharsetprober.py create mode 100644 venv/Lib/site-packages/chardet/mbcsgroupprober.py create mode 100644 venv/Lib/site-packages/chardet/mbcssm.py create mode 100644 venv/Lib/site-packages/chardet/sbcharsetprober.py create mode 100644 venv/Lib/site-packages/chardet/sbcsgroupprober.py create mode 100644 venv/Lib/site-packages/chardet/sjisprober.py create mode 100644 venv/Lib/site-packages/chardet/universaldetector.py create mode 100644 venv/Lib/site-packages/chardet/utf8prober.py create mode 100644 venv/Lib/site-packages/chardet/version.py create mode 100644 venv/Lib/site-packages/easy-install.pth create mode 100644 venv/Lib/site-packages/idna-2.10.dist-info/INSTALLER create mode 100644 venv/Lib/site-packages/idna-2.10.dist-info/LICENSE.rst create mode 100644 venv/Lib/site-packages/idna-2.10.dist-info/METADATA create mode 100644 venv/Lib/site-packages/idna-2.10.dist-info/RECORD create mode 100644 venv/Lib/site-packages/idna-2.10.dist-info/WHEEL create mode 100644 venv/Lib/site-packages/idna-2.10.dist-info/top_level.txt create mode 100644 venv/Lib/site-packages/idna/__init__.py create mode 100644 venv/Lib/site-packages/idna/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/__pycache__/codec.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/__pycache__/compat.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/__pycache__/core.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/__pycache__/idnadata.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/__pycache__/intranges.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/__pycache__/package_data.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/__pycache__/uts46data.cpython-36.pyc create mode 100644 venv/Lib/site-packages/idna/codec.py create mode 100644 venv/Lib/site-packages/idna/compat.py create mode 100644 venv/Lib/site-packages/idna/core.py create mode 100644 venv/Lib/site-packages/idna/idnadata.py create mode 100644 venv/Lib/site-packages/idna/intranges.py create mode 100644 venv/Lib/site-packages/idna/package_data.py create mode 100644 venv/Lib/site-packages/idna/uts46data.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/EGG-INFO/PKG-INFO create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/EGG-INFO/SOURCES.txt create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/EGG-INFO/dependency_links.txt create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/EGG-INFO/entry_points.txt create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/EGG-INFO/not-zip-safe create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/EGG-INFO/top_level.txt create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/__main__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/build_env.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cache.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cli/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cli/autocompletion.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cli/base_command.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cli/cmdoptions.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cli/main_parser.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cli/parser.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/cli/status_codes.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/check.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/completion.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/configuration.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/download.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/freeze.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/hash.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/help.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/install.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/list.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/search.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/show.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/uninstall.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/commands/wheel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/configuration.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/download.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/exceptions.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/index.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/locations.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/models/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/models/candidate.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/models/format_control.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/models/index.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/models/link.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/operations/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/operations/check.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/operations/freeze.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/operations/prepare.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/pep425tags.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/pyproject.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/constructors.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/req_file.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/req_install.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/req_set.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/req_tracker.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/req/req_uninstall.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/resolve.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/appdirs.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/deprecation.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/encoding.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/filesystem.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/glibc.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/hashes.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/logging.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/misc.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/models.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/outdated.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/packaging.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/setuptools_build.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/temp_dir.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/typing.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/utils/ui.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/vcs/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/vcs/bazaar.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/vcs/git.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/vcs/mercurial.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/vcs/subversion.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_internal/wheel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/appdirs.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/_cmd.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/adapter.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/cache.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/caches/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/caches/file_cache.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/caches/redis_cache.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/compat.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/controller.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/filewrapper.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/heuristics.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/serialize.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/cachecontrol/wrapper.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/certifi/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/certifi/__main__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/certifi/cacert.pem create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/certifi/core.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/big5freq.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/big5prober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/chardistribution.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/charsetgroupprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/charsetprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/cli/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/cli/chardetect.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/codingstatemachine.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/cp949prober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/enums.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/escprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/escsm.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/eucjpprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/euckrfreq.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/euckrprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/euctwfreq.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/euctwprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/gb2312freq.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/gb2312prober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/hebrewprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/jisfreq.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/jpcntx.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/langbulgarianmodel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/langcyrillicmodel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/langgreekmodel.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/langhebrewmodel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/langhungarianmodel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/langthaimodel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/langturkishmodel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/latin1prober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/mbcharsetprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/mbcsgroupprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/mbcssm.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/sbcharsetprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/sbcsgroupprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/sjisprober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/universaldetector.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/utf8prober.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/chardet/version.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/colorama/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/colorama/ansi.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/colorama/ansitowin32.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/colorama/initialise.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/colorama/win32.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/colorama/winterm.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/_backport/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/_backport/misc.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/_backport/shutil.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/_backport/sysconfig.cfg create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/_backport/sysconfig.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/_backport/tarfile.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/database.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/index.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/locators.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/manifest.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/markers.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/metadata.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/resources.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/scripts.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/t32.exe create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/t64.exe create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/util.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/version.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/w32.exe create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/w64.exe create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distlib/wheel.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/distro.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_ihatexml.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_inputstream.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_tokenizer.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_trie/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_trie/_base.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_trie/datrie.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_trie/py.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/_utils.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/constants.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/alphabeticalattributes.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/base.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/inject_meta_charset.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/lint.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/optionaltags.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/sanitizer.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/filters/whitespace.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/html5parser.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/serializer.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treeadapters/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treeadapters/genshi.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treeadapters/sax.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treebuilders/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treebuilders/base.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treebuilders/dom.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treebuilders/etree.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treebuilders/etree_lxml.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treewalkers/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treewalkers/base.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treewalkers/dom.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treewalkers/etree.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treewalkers/etree_lxml.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/html5lib/treewalkers/genshi.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/codec.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/core.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/idnadata.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/intranges.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/package_data.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/idna/uts46data.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/ipaddress.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/lockfile/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/lockfile/linklockfile.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/lockfile/mkdirlockfile.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/lockfile/pidlockfile.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/lockfile/sqlitelockfile.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/lockfile/symlinklockfile.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/msgpack/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/msgpack/_version.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/msgpack/exceptions.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/msgpack/fallback.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/__about__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/_compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/_structures.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/markers.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/requirements.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/specifiers.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/utils.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/packaging/version.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/__init__.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/_in_process.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/build.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/check.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/colorlog.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/envbuild.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pep517/wrappers.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pkg_resources/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pkg_resources/py31compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/progress/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/progress/bar.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/progress/counter.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/progress/helpers.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/progress/spinner.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pyparsing.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pytoml/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pytoml/core.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pytoml/parser.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pytoml/test.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pytoml/utils.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/pytoml/writer.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/__version__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/_internal_utils.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/adapters.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/api.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/auth.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/certs.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/compat.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/cookies.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/exceptions.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/help.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/hooks.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/models.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/packages.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/sessions.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/status_codes.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/structures.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/requests/utils.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/retrying.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/six.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/_collections.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/connection.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/connectionpool.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/_appengine_environ.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/_securetransport/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/_securetransport/bindings.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/_securetransport/low_level.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/appengine.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/ntlmpool.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/pyopenssl.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/securetransport.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/contrib/socks.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/exceptions.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/fields.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/filepost.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/packages/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/packages/backports/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/packages/backports/makefile.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/packages/six.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/packages/ssl_match_hostname/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/packages/ssl_match_hostname/_implementation.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/poolmanager.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/request.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/response.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/connection.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/queue.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/request.py create mode 100644 
venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/response.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/retry.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/ssl_.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/timeout.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/url.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/urllib3/util/wait.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/webencodings/__init__.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/webencodings/labels.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/webencodings/mklabels.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/webencodings/tests.py create mode 100644 venv/Lib/site-packages/pip-19.0.3-py3.6.egg/pip/_vendor/webencodings/x_user_defined.py create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/DESCRIPTION.rst create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/INSTALLER create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/LICENSE.txt create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/METADATA create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/RECORD create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/WHEEL create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/metadata.json create mode 100644 venv/Lib/site-packages/requests-2.24.0.dist-info/top_level.txt create mode 100644 venv/Lib/site-packages/requests/__init__.py create mode 100644 venv/Lib/site-packages/requests/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/__version__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/_internal_utils.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/adapters.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/api.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/auth.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/certs.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/compat.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/cookies.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/exceptions.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/help.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/hooks.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/models.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/packages.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/sessions.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/status_codes.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/structures.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__pycache__/utils.cpython-36.pyc create mode 100644 venv/Lib/site-packages/requests/__version__.py create mode 100644 venv/Lib/site-packages/requests/_internal_utils.py create mode 100644 venv/Lib/site-packages/requests/adapters.py create mode 100644 
venv/Lib/site-packages/requests/api.py create mode 100644 venv/Lib/site-packages/requests/auth.py create mode 100644 venv/Lib/site-packages/requests/certs.py create mode 100644 venv/Lib/site-packages/requests/compat.py create mode 100644 venv/Lib/site-packages/requests/cookies.py create mode 100644 venv/Lib/site-packages/requests/exceptions.py create mode 100644 venv/Lib/site-packages/requests/help.py create mode 100644 venv/Lib/site-packages/requests/hooks.py create mode 100644 venv/Lib/site-packages/requests/models.py create mode 100644 venv/Lib/site-packages/requests/packages.py create mode 100644 venv/Lib/site-packages/requests/sessions.py create mode 100644 venv/Lib/site-packages/requests/status_codes.py create mode 100644 venv/Lib/site-packages/requests/structures.py create mode 100644 venv/Lib/site-packages/requests/utils.py create mode 100644 venv/Lib/site-packages/setuptools-40.8.0-py3.6.egg create mode 100644 venv/Lib/site-packages/setuptools.pth create mode 100644 venv/Lib/site-packages/soupsieve-2.0.1.dist-info/INSTALLER create mode 100644 venv/Lib/site-packages/soupsieve-2.0.1.dist-info/LICENSE.md create mode 100644 venv/Lib/site-packages/soupsieve-2.0.1.dist-info/METADATA create mode 100644 venv/Lib/site-packages/soupsieve-2.0.1.dist-info/RECORD create mode 100644 venv/Lib/site-packages/soupsieve-2.0.1.dist-info/WHEEL create mode 100644 venv/Lib/site-packages/soupsieve-2.0.1.dist-info/top_level.txt create mode 100644 venv/Lib/site-packages/soupsieve/__init__.py create mode 100644 venv/Lib/site-packages/soupsieve/__meta__.py create mode 100644 venv/Lib/site-packages/soupsieve/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/soupsieve/__pycache__/__meta__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/soupsieve/__pycache__/css_match.cpython-36.pyc create mode 100644 venv/Lib/site-packages/soupsieve/__pycache__/css_parser.cpython-36.pyc create mode 100644 venv/Lib/site-packages/soupsieve/__pycache__/css_types.cpython-36.pyc create mode 100644 venv/Lib/site-packages/soupsieve/__pycache__/util.cpython-36.pyc create mode 100644 venv/Lib/site-packages/soupsieve/css_match.py create mode 100644 venv/Lib/site-packages/soupsieve/css_parser.py create mode 100644 venv/Lib/site-packages/soupsieve/css_types.py create mode 100644 venv/Lib/site-packages/soupsieve/util.py create mode 100644 venv/Lib/site-packages/urllib3-1.25.10.dist-info/INSTALLER create mode 100644 venv/Lib/site-packages/urllib3-1.25.10.dist-info/LICENSE.txt create mode 100644 venv/Lib/site-packages/urllib3-1.25.10.dist-info/METADATA create mode 100644 venv/Lib/site-packages/urllib3-1.25.10.dist-info/RECORD create mode 100644 venv/Lib/site-packages/urllib3-1.25.10.dist-info/WHEEL create mode 100644 venv/Lib/site-packages/urllib3-1.25.10.dist-info/top_level.txt create mode 100644 venv/Lib/site-packages/urllib3/__init__.py create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/_collections.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/_version.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/connection.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/connectionpool.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/exceptions.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/fields.cpython-36.pyc create mode 100644 
venv/Lib/site-packages/urllib3/__pycache__/filepost.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/poolmanager.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/request.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/__pycache__/response.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/_collections.py create mode 100644 venv/Lib/site-packages/urllib3/_version.py create mode 100644 venv/Lib/site-packages/urllib3/connection.py create mode 100644 venv/Lib/site-packages/urllib3/connectionpool.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/__init__.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/__pycache__/_appengine_environ.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/__pycache__/appengine.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/__pycache__/ntlmpool.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/__pycache__/pyopenssl.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/__pycache__/securetransport.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/__pycache__/socks.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/_appengine_environ.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/_securetransport/__init__.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/_securetransport/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/_securetransport/__pycache__/bindings.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/_securetransport/__pycache__/low_level.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/contrib/_securetransport/bindings.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/_securetransport/low_level.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/appengine.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/ntlmpool.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/pyopenssl.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/securetransport.py create mode 100644 venv/Lib/site-packages/urllib3/contrib/socks.py create mode 100644 venv/Lib/site-packages/urllib3/exceptions.py create mode 100644 venv/Lib/site-packages/urllib3/fields.py create mode 100644 venv/Lib/site-packages/urllib3/filepost.py create mode 100644 venv/Lib/site-packages/urllib3/packages/__init__.py create mode 100644 venv/Lib/site-packages/urllib3/packages/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/packages/__pycache__/six.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/packages/backports/__init__.py create mode 100644 venv/Lib/site-packages/urllib3/packages/backports/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/packages/backports/__pycache__/makefile.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/packages/backports/makefile.py create mode 100644 venv/Lib/site-packages/urllib3/packages/six.py create mode 100644 venv/Lib/site-packages/urllib3/packages/ssl_match_hostname/__init__.py create mode 100644 venv/Lib/site-packages/urllib3/packages/ssl_match_hostname/__pycache__/__init__.cpython-36.pyc create mode 100644 
venv/Lib/site-packages/urllib3/packages/ssl_match_hostname/__pycache__/_implementation.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/packages/ssl_match_hostname/_implementation.py create mode 100644 venv/Lib/site-packages/urllib3/poolmanager.py create mode 100644 venv/Lib/site-packages/urllib3/request.py create mode 100644 venv/Lib/site-packages/urllib3/response.py create mode 100644 venv/Lib/site-packages/urllib3/util/__init__.py create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/__init__.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/connection.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/queue.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/request.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/response.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/retry.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/ssl_.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/timeout.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/url.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/__pycache__/wait.cpython-36.pyc create mode 100644 venv/Lib/site-packages/urllib3/util/connection.py create mode 100644 venv/Lib/site-packages/urllib3/util/queue.py create mode 100644 venv/Lib/site-packages/urllib3/util/request.py create mode 100644 venv/Lib/site-packages/urllib3/util/response.py create mode 100644 venv/Lib/site-packages/urllib3/util/retry.py create mode 100644 venv/Lib/site-packages/urllib3/util/ssl_.py create mode 100644 venv/Lib/site-packages/urllib3/util/timeout.py create mode 100644 venv/Lib/site-packages/urllib3/util/url.py create mode 100644 venv/Lib/site-packages/urllib3/util/wait.py create mode 100644 venv/Lib/tcl8.6/init.tcl create mode 100644 venv/Scripts/Activate.ps1 create mode 100644 venv/Scripts/_asyncio.pyd create mode 100644 venv/Scripts/_bz2.pyd create mode 100644 venv/Scripts/_ctypes.pyd create mode 100644 venv/Scripts/_ctypes_test.pyd create mode 100644 venv/Scripts/_decimal.pyd create mode 100644 venv/Scripts/_distutils_findvs.pyd create mode 100644 venv/Scripts/_elementtree.pyd create mode 100644 venv/Scripts/_hashlib.pyd create mode 100644 venv/Scripts/_lzma.pyd create mode 100644 venv/Scripts/_msi.pyd create mode 100644 venv/Scripts/_multiprocessing.pyd create mode 100644 venv/Scripts/_overlapped.pyd create mode 100644 venv/Scripts/_socket.pyd create mode 100644 venv/Scripts/_sqlite3.pyd create mode 100644 venv/Scripts/_ssl.pyd create mode 100644 venv/Scripts/_testbuffer.pyd create mode 100644 venv/Scripts/_testcapi.pyd create mode 100644 venv/Scripts/_testconsole.pyd create mode 100644 venv/Scripts/_testimportmultiple.pyd create mode 100644 venv/Scripts/_testmultiphase.pyd create mode 100644 venv/Scripts/_tkinter.pyd create mode 100644 venv/Scripts/activate create mode 100644 venv/Scripts/activate.bat create mode 100644 venv/Scripts/chardetect.exe create mode 100644 venv/Scripts/deactivate.bat create mode 100644 venv/Scripts/easy_install-3.6-script.py create mode 100644 venv/Scripts/easy_install-3.6.exe create mode 100644 venv/Scripts/easy_install-3.6.exe.manifest create mode 100644 venv/Scripts/easy_install-script.py create mode 100644 venv/Scripts/easy_install.exe create mode 100644 venv/Scripts/easy_install.exe.manifest create mode 100644 
venv/Scripts/pip-script.py create mode 100644 venv/Scripts/pip.exe create mode 100644 venv/Scripts/pip.exe.manifest create mode 100644 venv/Scripts/pip3-script.py create mode 100644 venv/Scripts/pip3.6-script.py create mode 100644 venv/Scripts/pip3.6.exe create mode 100644 venv/Scripts/pip3.6.exe.manifest create mode 100644 venv/Scripts/pip3.exe create mode 100644 venv/Scripts/pip3.exe.manifest create mode 100644 venv/Scripts/pyexpat.pyd create mode 100644 venv/Scripts/python.exe create mode 100644 venv/Scripts/python3.dll create mode 100644 venv/Scripts/python36.dll create mode 100644 venv/Scripts/pythonw.exe create mode 100644 venv/Scripts/select.pyd create mode 100644 venv/Scripts/sqlite3.dll create mode 100644 venv/Scripts/tcl86t.dll create mode 100644 venv/Scripts/tk86t.dll create mode 100644 venv/Scripts/unicodedata.pyd create mode 100644 venv/Scripts/vcruntime140.dll create mode 100644 venv/Scripts/winsound.pyd create mode 100644 venv/pyvenv.cfg diff --git a/.idea/.gitignore b/.idea/.gitignore new file mode 100644 index 0000000..e7e9d11 --- /dev/null +++ b/.idea/.gitignore @@ -0,0 +1,2 @@ +# Default ignored files +/workspace.xml diff --git a/.idea/Amazon-Flipkart-Price-Comparison-Engine.iml b/.idea/Amazon-Flipkart-Price-Comparison-Engine.iml new file mode 100644 index 0000000..54a760b --- /dev/null +++ b/.idea/Amazon-Flipkart-Price-Comparison-Engine.iml @@ -0,0 +1,10 @@ + + + + + + + + + + \ No newline at end of file diff --git a/.idea/inspectionProfiles/profiles_settings.xml b/.idea/inspectionProfiles/profiles_settings.xml new file mode 100644 index 0000000..105ce2d --- /dev/null +++ b/.idea/inspectionProfiles/profiles_settings.xml @@ -0,0 +1,6 @@ + + + + \ No newline at end of file diff --git a/.idea/misc.xml b/.idea/misc.xml new file mode 100644 index 0000000..0af03d1 --- /dev/null +++ b/.idea/misc.xml @@ -0,0 +1,7 @@ + + + + + + \ No newline at end of file diff --git a/.idea/modules.xml b/.idea/modules.xml new file mode 100644 index 0000000..a9af51d --- /dev/null +++ b/.idea/modules.xml @@ -0,0 +1,8 @@ + + + + + + + + \ No newline at end of file diff --git a/.idea/vcs.xml b/.idea/vcs.xml new file mode 100644 index 0000000..94a25f7 --- /dev/null +++ b/.idea/vcs.xml @@ -0,0 +1,6 @@ + + + + + + \ No newline at end of file diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/AUTHORS b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/AUTHORS new file mode 100644 index 0000000..1f14fe0 --- /dev/null +++ b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/AUTHORS @@ -0,0 +1,49 @@ +Behold, mortal, the origins of Beautiful Soup... +================================================ + +Leonard Richardson is the primary maintainer. + +Aaron DeVore and Isaac Muse have made significant contributions to the +code base. + +Mark Pilgrim provided the encoding detection code that forms the base +of UnicodeDammit. + +Thomas Kluyver and Ezio Melotti finished the work of getting Beautiful +Soup 4 working under Python 3. + +Simon Willison wrote soupselect, which was used to make Beautiful Soup +support CSS selectors. Isaac Muse wrote SoupSieve, which made it +possible to _remove_ the CSS selector code from Beautiful Soup. + +Sam Ruby helped with a lot of edge cases. + +Jonathan Ellis was awarded the prestigious Beau Potage D'Or for his +work in solving the nestable tags conundrum. 
+ +An incomplete list of people have contributed patches to Beautiful +Soup: + + Istvan Albert, Andrew Lin, Anthony Baxter, Oliver Beattie, Andrew +Boyko, Tony Chang, Francisco Canas, "Delong", Zephyr Fang, Fuzzy, +Roman Gaufman, Yoni Gilad, Richie Hindle, Toshihiro Kamiya, Peteris +Krumins, Kent Johnson, Marek Kapolka, Andreas Kostyrka, Roel Kramer, +Ben Last, Robert Leftwich, Stefaan Lippens, "liquider", Staffan +Malmgren, Ksenia Marasanova, JP Moins, Adam Monsen, John Nagle, "Jon", +Ed Oskiewicz, Martijn Peters, Greg Phillips, Giles Radford, Stefano +Revera, Arthur Rudolph, Marko Samastur, James Salter, Jouni Seppänen, +Alexander Schmolck, Tim Shirley, Geoffrey Sneddon, Ville Skyttä, +"Vikas", Jens Svalgaard, Andy Theyers, Eric Weiser, Glyn Webster, John +Wiseman, Paul Wright, Danny Yoo + +An incomplete list of people who made suggestions or found bugs or +found ways to break Beautiful Soup: + + Hanno Böck, Matteo Bertini, Chris Curvey, Simon Cusack, Bruce Eckel, + Matt Ernst, Michael Foord, Tom Harris, Bill de hOra, Donald Howes, + Matt Patterson, Scott Roberts, Steve Strassmann, Mike Williams, + warchild at redho dot com, Sami Kuisma, Carlos Rocha, Bob Hutchison, + Joren Mc, Michal Migurski, John Kleven, Tim Heaney, Tripp Lilley, Ed + Summers, Dennis Sutch, Chris Smith, Aaron Swartz, Stuart + Turner, Greg Edwards, Kevin J Kalupson, Nikos Kouremenos, Artur de + Sousa Rocha, Yichun Wei, Per Vognsen diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/COPYING.txt b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/COPYING.txt new file mode 100644 index 0000000..fb6ae69 --- /dev/null +++ b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/COPYING.txt @@ -0,0 +1,27 @@ +Beautiful Soup is made available under the MIT license: + + Copyright (c) 2004-2017 Leonard Richardson + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + "Software"), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be + included in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + SOFTWARE. + +Beautiful Soup incorporates code from the html5lib library, which is +also made available under the MIT license. 
Copyright (c) 2006-2013 +James Graham and other contributors diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/INSTALLER b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/INSTALLER new file mode 100644 index 0000000..a1b589e --- /dev/null +++ b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/INSTALLER @@ -0,0 +1 @@ +pip diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/LICENSE b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/LICENSE new file mode 100644 index 0000000..4c068ba --- /dev/null +++ b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/LICENSE @@ -0,0 +1,30 @@ +Beautiful Soup is made available under the MIT license: + + Copyright (c) 2004-2019 Leonard Richardson + + Permission is hereby granted, free of charge, to any person obtaining + a copy of this software and associated documentation files (the + "Software"), to deal in the Software without restriction, including + without limitation the rights to use, copy, modify, merge, publish, + distribute, sublicense, and/or sell copies of the Software, and to + permit persons to whom the Software is furnished to do so, subject to + the following conditions: + + The above copyright notice and this permission notice shall be + included in all copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, + EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND + NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS + BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN + ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN + CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + SOFTWARE. + +Beautiful Soup incorporates code from the html5lib library, which is +also made available under the MIT license. Copyright (c) 2006-2013 +James Graham and other contributors + +Beautiful Soup depends on the soupsieve library, which is also made +available under the MIT license. 
Copyright (c) 2018 Isaac Muse
diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/METADATA b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/METADATA
new file mode 100644
index 0000000..1b4a564
--- /dev/null
+++ b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/METADATA
@@ -0,0 +1,131 @@
+Metadata-Version: 2.1
+Name: beautifulsoup4
+Version: 4.9.1
+Summary: Screen-scraping library
+Home-page: http://www.crummy.com/software/BeautifulSoup/bs4/
+Author: Leonard Richardson
+Author-email: leonardr@segfault.org
+License: MIT
+Download-URL: http://www.crummy.com/software/BeautifulSoup/bs4/download/
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Topic :: Text Processing :: Markup :: HTML
+Classifier: Topic :: Text Processing :: Markup :: XML
+Classifier: Topic :: Text Processing :: Markup :: SGML
+Classifier: Topic :: Software Development :: Libraries :: Python Modules
+Description-Content-Type: text/markdown
+Requires-Dist: soupsieve (>1.2)
+Provides-Extra: html5lib
+Requires-Dist: html5lib ; extra == 'html5lib'
+Provides-Extra: lxml
+Requires-Dist: lxml ; extra == 'lxml'
+
+Beautiful Soup is a library that makes it easy to scrape information
+from web pages. It sits atop an HTML or XML parser, providing Pythonic
+idioms for iterating, searching, and modifying the parse tree.
+
+# Quick start
+
+```
+>>> from bs4 import BeautifulSoup
+>>> soup = BeautifulSoup("<p>Some<b>bad<i>HTML")
+>>> print soup.prettify()
+<html>
+ <body>
+  <p>
+   Some
+   <b>
+    bad
+    <i>
+     HTML
+    </i>
+   </b>
+  </p>
+ </body>
+</html>
+>>> soup.find(text="bad")
+u'bad'
+>>> soup.i
+<i>HTML</i>
+#
+>>> soup = BeautifulSoup("<tag1>Some<tag2/>bad<tag3>XML", "xml")
+#
+>>> print soup.prettify()
+<?xml version="1.0" encoding="utf-8"?>
+<tag1>
+ Some
+ <tag2/>
+ bad
+ <tag3>
+  XML
+ </tag3>
+</tag1>
+```
+
+To go beyond the basics, [comprehensive documentation is available](http://www.crummy.com/software/BeautifulSoup/bs4/doc/).
+
+# Links
+
+* [Homepage](http://www.crummy.com/software/BeautifulSoup/bs4/)
+* [Documentation](http://www.crummy.com/software/BeautifulSoup/bs4/doc/)
+* [Discussion group](http://groups.google.com/group/beautifulsoup/)
+* [Development](https://code.launchpad.net/beautifulsoup/)
+* [Bug tracker](https://bugs.launchpad.net/beautifulsoup/)
+* [Complete changelog](https://bazaar.launchpad.net/~leonardr/beautifulsoup/bs4/view/head:/CHANGELOG)
+
+# Note on Python 2 sunsetting
+
+Since 2012, Beautiful Soup has been developed as a Python 2 library
+which is automatically converted to Python 3 code as necessary. This
+makes it impossible to take advantage of some features of Python
+3.
+
+For this reason, I plan to discontinue Beautiful Soup's Python 2
+support at some point after December 31, 2020: one year after the
+sunset date for Python 2 itself. Beyond that point, new Beautiful Soup
+development will exclusively target Python 3. Of course, older
+releases of Beautiful Soup, which support both versions, will continue
+to be available.
+
+# Supporting the project
+
+If you use Beautiful Soup as part of your professional work, please consider a
+[Tidelift subscription](https://tidelift.com/subscription/pkg/pypi-beautifulsoup4?utm_source=pypi-beautifulsoup4&utm_medium=referral&utm_campaign=readme).
+This will support many of the free software projects your organization
+depends on, not just Beautiful Soup.
+
+If you use Beautiful Soup for personal projects, the best way to say
+thank you is to read
+[Tool Safety](https://www.crummy.com/software/BeautifulSoup/zine/), a zine I
+wrote about what Beautiful Soup has taught me about software
+development.
+
+# Building the documentation
+
+The bs4/doc/ directory contains full documentation in Sphinx
+format. Run `make html` in that directory to create HTML
+documentation.
+
+# Running the unit tests
+
+Beautiful Soup supports unit test discovery from the project root directory:
+
+```
+$ nosetests
+```
+
+```
+$ python -m unittest discover -s bs4
+```
+
+If you checked out the source tree, you should see a script in the
+home directory called test-all-versions. This script will run the unit
+tests under Python 2, then create a temporary Python 3 conversion of
+the source and run the unit tests again under Python 3.
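The two packages this commit vendors into venv/ (requests and beautifulsoup4) are normally used together along the lines of the minimal sketch below. It is illustrative only, not code from this repository; the URL, User-Agent value, and tag names are placeholder assumptions.

```
# Minimal, hypothetical sketch: fetch a page with requests and parse it with
# Beautiful Soup, naming the parser explicitly. Not this project's actual code.
import requests
from bs4 import BeautifulSoup

def fetch_title(url):
    """Return the text of the page's <title> tag, or None if it is missing."""
    # Placeholder headers and timeout; real scrapers usually set both.
    response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
    response.raise_for_status()
    # Name the parser explicitly so the same tree is built on every machine.
    soup = BeautifulSoup(response.text, "html.parser")
    title = soup.find("title")
    return title.get_text(strip=True) if title else None

if __name__ == "__main__":
    print(fetch_title("https://example.com"))
```

Passing "html.parser" explicitly follows the recommendation in the BeautifulSoup constructor docstring added later in this patch and avoids the GuessedAtParserWarning it defines.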
+ + diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/RECORD b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/RECORD new file mode 100644 index 0000000..fa583fc --- /dev/null +++ b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/RECORD @@ -0,0 +1,44 @@ +beautifulsoup4-4.9.1.dist-info/AUTHORS,sha256=uSIdbrBb1sobdXl7VrlUvuvim2dN9kF3MH4Edn0WKGE,2176 +beautifulsoup4-4.9.1.dist-info/COPYING.txt,sha256=pH6lEjYJhGT-C09Vl0NZC1MwVtngD0nsv4Apn6tH4jE,1315 +beautifulsoup4-4.9.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 +beautifulsoup4-4.9.1.dist-info/LICENSE,sha256=ynIn3bnu1syAnhV_Z7Ag543eBjJAAB0RhW-FxJy25CM,1447 +beautifulsoup4-4.9.1.dist-info/METADATA,sha256=cIvKNOIcfpYqN5nITv3UDCtXZF1l_B5cPwpib-Gkjk8,4062 +beautifulsoup4-4.9.1.dist-info/RECORD,, +beautifulsoup4-4.9.1.dist-info/WHEEL,sha256=g4nMs7d-Xl9-xC9XovUrsDHGXt-FT0E17Yqo92DEfvY,92 +beautifulsoup4-4.9.1.dist-info/top_level.txt,sha256=H8VT-IuPWLzQqwG9_eChjXDJ1z0H9RRebdSR90Bjnkw,4 +bs4/__init__.py,sha256=ZM_Revj0Yipo_FRYdxS-rksn3MFXzR3XRyFirnM-h84,31551 +bs4/__pycache__/__init__.cpython-36.pyc,, +bs4/__pycache__/dammit.cpython-36.pyc,, +bs4/__pycache__/diagnose.cpython-36.pyc,, +bs4/__pycache__/element.cpython-36.pyc,, +bs4/__pycache__/formatter.cpython-36.pyc,, +bs4/__pycache__/testing.cpython-36.pyc,, +bs4/builder/__init__.py,sha256=VgrBobApHGLnuA8VCJuumZDP64UiGEzlxJJgLiV--sU,19841 +bs4/builder/__pycache__/__init__.cpython-36.pyc,, +bs4/builder/__pycache__/_html5lib.cpython-36.pyc,, +bs4/builder/__pycache__/_htmlparser.cpython-36.pyc,, +bs4/builder/__pycache__/_lxml.cpython-36.pyc,, +bs4/builder/_html5lib.py,sha256=hDxlzVrAku_eU7zEt4gZ-sAXzG58GvkLfMz6P4zUqoA,18748 +bs4/builder/_htmlparser.py,sha256=PEKGvBcJcf6_78CVwA1_uLxulnmnlRuWH0W2caTzpKk,18406 +bs4/builder/_lxml.py,sha256=e4w91RZi3NII_QYe2e1-EiN_BxQtgJPSRwQ8Xgz41ZA,12234 +bs4/dammit.py,sha256=k_XPB3kbZsHM01ckf9BxCUB2Eu2dIQ3d3DDt7UEv9RA,34130 +bs4/diagnose.py,sha256=HkiiFUWS9KU3sLILDYm8X-Tu0wZRuTdMClqtXPd99go,7761 +bs4/element.py,sha256=N6UNaAICZ0BjUl2VnZWZJz7b-v9W50R1YrCmaZbXw_0,81066 +bs4/formatter.py,sha256=Wayv1d6fUc9BSCa2k9uhvWwm89xCukdtJhyi9Sxvkuc,5654 +bs4/testing.py,sha256=xpDhC4AQaVrvNVog1Lgg8nUtg0Nack0CyEVD-bjAAj0,44897 +bs4/tests/__init__.py,sha256=bdUBDE750n7qNEfue7-3a1fBaUxJlvZMkvJvZa-lbYs,27 +bs4/tests/__pycache__/__init__.cpython-36.pyc,, +bs4/tests/__pycache__/test_builder_registry.cpython-36.pyc,, +bs4/tests/__pycache__/test_docs.cpython-36.pyc,, +bs4/tests/__pycache__/test_html5lib.cpython-36.pyc,, +bs4/tests/__pycache__/test_htmlparser.cpython-36.pyc,, +bs4/tests/__pycache__/test_lxml.cpython-36.pyc,, +bs4/tests/__pycache__/test_soup.cpython-36.pyc,, +bs4/tests/__pycache__/test_tree.cpython-36.pyc,, +bs4/tests/test_builder_registry.py,sha256=pllfRpArh9TYhjjRUiu1wITr9Ryyv4hiaAtRjij-k4E,5582 +bs4/tests/test_docs.py,sha256=FXfz2bGL4Xe0q6duwpmg9hmFiZuU4DVJPNZ0hTb6aH4,1067 +bs4/tests/test_html5lib.py,sha256=eWnLGHek_RO_TMq0Ixpb1RF3BEDrvhenMf2eaEBjjsg,6754 +bs4/tests/test_htmlparser.py,sha256=3294XvFbWVe0AYoTlnLPEDW_a0Om0BKRcsrwlJbxUaI,3941 +bs4/tests/test_lxml.py,sha256=xJr8eDrtHSb_vQw88lYEKyfdM1Hel4-dBaz14vQq78M,4105 +bs4/tests/test_soup.py,sha256=EhE1dhHKyctNu0y2l0ql6FOHg9qliEt8Kh7jfCx1lDw,29303 +bs4/tests/test_tree.py,sha256=W75j1-aDx8qHWzOr_JtRCjDE83nUShFauEzGfoys2k0,88988 diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/WHEEL b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/WHEEL new file mode 100644 index 0000000..b552003 --- /dev/null +++ 
b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/WHEEL @@ -0,0 +1,5 @@ +Wheel-Version: 1.0 +Generator: bdist_wheel (0.34.2) +Root-Is-Purelib: true +Tag: py3-none-any + diff --git a/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/top_level.txt b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/top_level.txt new file mode 100644 index 0000000..1315442 --- /dev/null +++ b/venv/Lib/site-packages/beautifulsoup4-4.9.1.dist-info/top_level.txt @@ -0,0 +1 @@ +bs4 diff --git a/venv/Lib/site-packages/bs4/__init__.py b/venv/Lib/site-packages/bs4/__init__.py new file mode 100644 index 0000000..afaca71 --- /dev/null +++ b/venv/Lib/site-packages/bs4/__init__.py @@ -0,0 +1,777 @@ +"""Beautiful Soup Elixir and Tonic - "The Screen-Scraper's Friend". + +http://www.crummy.com/software/BeautifulSoup/ + +Beautiful Soup uses a pluggable XML or HTML parser to parse a +(possibly invalid) document into a tree representation. Beautiful Soup +provides methods and Pythonic idioms that make it easy to navigate, +search, and modify the parse tree. + +Beautiful Soup works with Python 2.7 and up. It works better if lxml +and/or html5lib is installed. + +For more than you ever wanted to know about Beautiful Soup, see the +documentation: http://www.crummy.com/software/BeautifulSoup/bs4/doc/ +""" + +__author__ = "Leonard Richardson (leonardr@segfault.org)" +__version__ = "4.9.1" +__copyright__ = "Copyright (c) 2004-2020 Leonard Richardson" +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +__all__ = ['BeautifulSoup'] + +import os +import re +import sys +import traceback +import warnings + +from .builder import builder_registry, ParserRejectedMarkup +from .dammit import UnicodeDammit +from .element import ( + CData, + Comment, + DEFAULT_OUTPUT_ENCODING, + Declaration, + Doctype, + NavigableString, + PageElement, + ProcessingInstruction, + PYTHON_SPECIFIC_ENCODINGS, + ResultSet, + Script, + Stylesheet, + SoupStrainer, + Tag, + TemplateString, + ) + +# The very first thing we do is give a useful error if someone is +# running this code under Python 3 without converting it. +'You are trying to run the Python 2 version of Beautiful Soup under Python 3. This will not work.'!='You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).' + +# Define some custom warnings. +class GuessedAtParserWarning(UserWarning): + """The warning issued when BeautifulSoup has to guess what parser to + use -- probably because no parser was specified in the constructor. + """ + +class MarkupResemblesLocatorWarning(UserWarning): + """The warning issued when BeautifulSoup is given 'markup' that + actually looks like a resource locator -- a URL or a path to a file + on disk. + """ + + +class BeautifulSoup(Tag): + """A data structure representing a parsed HTML or XML document. + + Most of the methods you'll call on a BeautifulSoup object are inherited from + PageElement or Tag. + + Internally, this class defines the basic interface called by the + tree builders when converting an HTML/XML document into a data + structure. The interface abstracts away the differences between + parsers. To write a new tree builder, you'll need to understand + these methods as a whole. 
+ + These methods will be called by the BeautifulSoup constructor: + * reset() + * feed(markup) + + The tree builder may call these methods from its feed() implementation: + * handle_starttag(name, attrs) # See note about return value + * handle_endtag(name) + * handle_data(data) # Appends to the current data node + * endData(containerClass) # Ends the current data node + + No matter how complicated the underlying parser is, you should be + able to build a tree using 'start tag' events, 'end tag' events, + 'data' events, and "done with data" events. + + If you encounter an empty-element tag (aka a self-closing tag, + like HTML's
tag), call handle_starttag and then + handle_endtag. + """ + + # Since BeautifulSoup subclasses Tag, it's possible to treat it as + # a Tag with a .name. This name makes it clear the BeautifulSoup + # object isn't a real markup tag. + ROOT_TAG_NAME = '[document]' + + # If the end-user gives no indication which tree builder they + # want, look for one with these features. + DEFAULT_BUILDER_FEATURES = ['html', 'fast'] + + # A string containing all ASCII whitespace characters, used in + # endData() to detect data chunks that seem 'empty'. + ASCII_SPACES = '\x20\x0a\x09\x0c\x0d' + + NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, pass the additional argument 'features=\"%(parser)s\"' to the BeautifulSoup constructor.\n" + + def __init__(self, markup="", features=None, builder=None, + parse_only=None, from_encoding=None, exclude_encodings=None, + element_classes=None, **kwargs): + """Constructor. + + :param markup: A string or a file-like object representing + markup to be parsed. + + :param features: Desirable features of the parser to be + used. This may be the name of a specific parser ("lxml", + "lxml-xml", "html.parser", or "html5lib") or it may be the + type of markup to be used ("html", "html5", "xml"). It's + recommended that you name a specific parser, so that + Beautiful Soup gives you the same results across platforms + and virtual environments. + + :param builder: A TreeBuilder subclass to instantiate (or + instance to use) instead of looking one up based on + `features`. You only need to use this if you've implemented a + custom TreeBuilder. + + :param parse_only: A SoupStrainer. Only parts of the document + matching the SoupStrainer will be considered. This is useful + when parsing part of a document that would otherwise be too + large to fit into memory. + + :param from_encoding: A string indicating the encoding of the + document to be parsed. Pass this in if Beautiful Soup is + guessing wrongly about the document's encoding. + + :param exclude_encodings: A list of strings indicating + encodings known to be wrong. Pass this in if you don't know + the document's encoding but you know Beautiful Soup's guess is + wrong. + + :param element_classes: A dictionary mapping BeautifulSoup + classes like Tag and NavigableString, to other classes you'd + like to be instantiated instead as the parse tree is + built. This is useful for subclassing Tag or NavigableString + to modify default behavior. + + :param kwargs: For backwards compatibility purposes, the + constructor accepts certain keyword arguments used in + Beautiful Soup 3. None of these arguments do anything in + Beautiful Soup 4; they will result in a warning and then be + ignored. + + Apart from this, any keyword arguments passed into the + BeautifulSoup constructor are propagated to the TreeBuilder + constructor. This makes it possible to configure a + TreeBuilder by passing in arguments, not just by saying which + one to use. + """ + if 'convertEntities' in kwargs: + del kwargs['convertEntities'] + warnings.warn( + "BS4 does not respect the convertEntities argument to the " + "BeautifulSoup constructor. 
Entities are always converted " + "to Unicode characters.") + + if 'markupMassage' in kwargs: + del kwargs['markupMassage'] + warnings.warn( + "BS4 does not respect the markupMassage argument to the " + "BeautifulSoup constructor. The tree builder is responsible " + "for any necessary markup massage.") + + if 'smartQuotesTo' in kwargs: + del kwargs['smartQuotesTo'] + warnings.warn( + "BS4 does not respect the smartQuotesTo argument to the " + "BeautifulSoup constructor. Smart quotes are always converted " + "to Unicode characters.") + + if 'selfClosingTags' in kwargs: + del kwargs['selfClosingTags'] + warnings.warn( + "BS4 does not respect the selfClosingTags argument to the " + "BeautifulSoup constructor. The tree builder is responsible " + "for understanding self-closing tags.") + + if 'isHTML' in kwargs: + del kwargs['isHTML'] + warnings.warn( + "BS4 does not respect the isHTML argument to the " + "BeautifulSoup constructor. Suggest you use " + "features='lxml' for HTML and features='lxml-xml' for " + "XML.") + + def deprecated_argument(old_name, new_name): + if old_name in kwargs: + warnings.warn( + 'The "%s" argument to the BeautifulSoup constructor ' + 'has been renamed to "%s."' % (old_name, new_name)) + value = kwargs[old_name] + del kwargs[old_name] + return value + return None + + parse_only = parse_only or deprecated_argument( + "parseOnlyThese", "parse_only") + + from_encoding = from_encoding or deprecated_argument( + "fromEncoding", "from_encoding") + + if from_encoding and isinstance(markup, str): + warnings.warn("You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.") + from_encoding = None + + self.element_classes = element_classes or dict() + + # We need this information to track whether or not the builder + # was specified well enough that we can omit the 'you need to + # specify a parser' warning. + original_builder = builder + original_features = features + + if isinstance(builder, type): + # A builder class was passed in; it needs to be instantiated. + builder_class = builder + builder = None + elif builder is None: + if isinstance(features, str): + features = [features] + if features is None or len(features) == 0: + features = self.DEFAULT_BUILDER_FEATURES + builder_class = builder_registry.lookup(*features) + if builder_class is None: + raise FeatureNotFound( + "Couldn't find a tree builder with the features you " + "requested: %s. Do you need to install a parser library?" + % ",".join(features)) + + # At this point either we have a TreeBuilder instance in + # builder, or we have a builder_class that we can instantiate + # with the remaining **kwargs. + if builder is None: + builder = builder_class(**kwargs) + if not original_builder and not ( + original_features == builder.NAME or + original_features in builder.ALTERNATE_NAMES + ): + if builder.is_xml: + markup_type = "XML" + else: + markup_type = "HTML" + + # This code adapted from warnings.py so that we get the same line + # of code as our warnings.warn() call gets, even if the answer is wrong + # (as it may be in a multithreading situation). 
+ caller = None + try: + caller = sys._getframe(1) + except ValueError: + pass + if caller: + globals = caller.f_globals + line_number = caller.f_lineno + else: + globals = sys.__dict__ + line_number= 1 + filename = globals.get('__file__') + if filename: + fnl = filename.lower() + if fnl.endswith((".pyc", ".pyo")): + filename = filename[:-1] + if filename: + # If there is no filename at all, the user is most likely in a REPL, + # and the warning is not necessary. + values = dict( + filename=filename, + line_number=line_number, + parser=builder.NAME, + markup_type=markup_type + ) + warnings.warn( + self.NO_PARSER_SPECIFIED_WARNING % values, + GuessedAtParserWarning, stacklevel=2 + ) + else: + if kwargs: + warnings.warn("Keyword arguments to the BeautifulSoup constructor will be ignored. These would normally be passed into the TreeBuilder constructor, but a TreeBuilder instance was passed in as `builder`.") + + self.builder = builder + self.is_xml = builder.is_xml + self.known_xml = self.is_xml + self._namespaces = dict() + self.parse_only = parse_only + + self.builder.initialize_soup(self) + + if hasattr(markup, 'read'): # It's a file-type object. + markup = markup.read() + elif len(markup) <= 256 and ( + (isinstance(markup, bytes) and not b'<' in markup) + or (isinstance(markup, str) and not '<' in markup) + ): + # Print out warnings for a couple beginner problems + # involving passing non-markup to Beautiful Soup. + # Beautiful Soup will still parse the input as markup, + # just in case that's what the user really wants. + if (isinstance(markup, str) + and not os.path.supports_unicode_filenames): + possible_filename = markup.encode("utf8") + else: + possible_filename = markup + is_file = False + try: + is_file = os.path.exists(possible_filename) + except Exception as e: + # This is almost certainly a problem involving + # characters not valid in filenames on this + # system. Just let it go. + pass + if is_file: + warnings.warn( + '"%s" looks like a filename, not markup. You should' + ' probably open this file and pass the filehandle into' + ' Beautiful Soup.' % self._decode_markup(markup), + MarkupResemblesLocatorWarning + ) + self._check_markup_is_url(markup) + + rejections = [] + success = False + for (self.markup, self.original_encoding, self.declared_html_encoding, + self.contains_replacement_characters) in ( + self.builder.prepare_markup( + markup, from_encoding, exclude_encodings=exclude_encodings)): + self.reset() + try: + self._feed() + success = True + break + except ParserRejectedMarkup as e: + rejections.append(e) + pass + + if not success: + other_exceptions = [str(e) for e in rejections] + raise ParserRejectedMarkup( + "The markup you provided was rejected by the parser. Trying a different parser or a different encoding may help.\n\nOriginal exception(s) from parser:\n " + "\n ".join(other_exceptions) + ) + + # Clear out the markup and remove the builder's circular + # reference to this object. + self.markup = None + self.builder.soup = None + + def __copy__(self): + """Copy a BeautifulSoup object by converting the document to a string and parsing it again.""" + copy = type(self)( + self.encode('utf-8'), builder=self.builder, from_encoding='utf-8' + ) + + # Although we encoded the tree to UTF-8, that may not have + # been the encoding of the original markup. Set the copy's + # .original_encoding to reflect the original object's + # .original_encoding. 
+ copy.original_encoding = self.original_encoding + return copy + + def __getstate__(self): + # Frequently a tree builder can't be pickled. + d = dict(self.__dict__) + if 'builder' in d and not self.builder.picklable: + d['builder'] = None + return d + + @classmethod + def _decode_markup(cls, markup): + """Ensure `markup` is bytes so it's safe to send into warnings.warn. + + TODO: warnings.warn had this problem back in 2010 but it might not + anymore. + """ + if isinstance(markup, bytes): + decoded = markup.decode('utf-8', 'replace') + else: + decoded = markup + return decoded + + @classmethod + def _check_markup_is_url(cls, markup): + """Error-handling method to raise a warning if incoming markup looks + like a URL. + + :param markup: A string. + """ + if isinstance(markup, bytes): + space = b' ' + cant_start_with = (b"http:", b"https:") + elif isinstance(markup, str): + space = ' ' + cant_start_with = ("http:", "https:") + else: + return + + if any(markup.startswith(prefix) for prefix in cant_start_with): + if not space in markup: + warnings.warn( + '"%s" looks like a URL. Beautiful Soup is not an' + ' HTTP client. You should probably use an HTTP client like' + ' requests to get the document behind the URL, and feed' + ' that document to Beautiful Soup.' % cls._decode_markup( + markup + ), + MarkupResemblesLocatorWarning + ) + + def _feed(self): + """Internal method that parses previously set markup, creating a large + number of Tag and NavigableString objects. + """ + # Convert the document to Unicode. + self.builder.reset() + + self.builder.feed(self.markup) + # Close out any unfinished strings and close all the open tags. + self.endData() + while self.currentTag.name != self.ROOT_TAG_NAME: + self.popTag() + + def reset(self): + """Reset this object to a state as though it had never parsed any + markup. + """ + Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME) + self.hidden = 1 + self.builder.reset() + self.current_data = [] + self.currentTag = None + self.tagStack = [] + self.preserve_whitespace_tag_stack = [] + self.string_container_stack = [] + self.pushTag(self) + + def new_tag(self, name, namespace=None, nsprefix=None, attrs={}, + sourceline=None, sourcepos=None, **kwattrs): + """Create a new Tag associated with this BeautifulSoup object. + + :param name: The name of the new Tag. + :param namespace: The URI of the new Tag's XML namespace, if any. + :param prefix: The prefix for the new Tag's XML namespace, if any. + :param attrs: A dictionary of this Tag's attribute values; can + be used instead of `kwattrs` for attributes like 'class' + that are reserved words in Python. + :param sourceline: The line number where this tag was + (purportedly) found in its source document. + :param sourcepos: The character position within `sourceline` where this + tag was (purportedly) found. + :param kwattrs: Keyword arguments for the new Tag's attribute values. + + """ + kwattrs.update(attrs) + return self.element_classes.get(Tag, Tag)( + None, self.builder, name, namespace, nsprefix, kwattrs, + sourceline=sourceline, sourcepos=sourcepos + ) + + def string_container(self, base_class=None): + container = base_class or NavigableString + + # There may be a general override of NavigableString. + container = self.element_classes.get( + container, container + ) + + # On top of that, we may be inside a tag that needs a special + # container class. 
+ if self.string_container_stack: + container = self.builder.string_containers.get( + self.string_container_stack[-1].name, container + ) + return container + + def new_string(self, s, subclass=None): + """Create a new NavigableString associated with this BeautifulSoup + object. + """ + container = self.string_container(subclass) + return container(s) + + def insert_before(self, successor): + """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement + it because there is nothing before or after it in the parse tree. + """ + raise NotImplementedError("BeautifulSoup objects don't support insert_before().") + + def insert_after(self, successor): + """This method is part of the PageElement API, but `BeautifulSoup` doesn't implement + it because there is nothing before or after it in the parse tree. + """ + raise NotImplementedError("BeautifulSoup objects don't support insert_after().") + + def popTag(self): + """Internal method called by _popToTag when a tag is closed.""" + tag = self.tagStack.pop() + if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]: + self.preserve_whitespace_tag_stack.pop() + if self.string_container_stack and tag == self.string_container_stack[-1]: + self.string_container_stack.pop() + #print("Pop", tag.name) + if self.tagStack: + self.currentTag = self.tagStack[-1] + return self.currentTag + + def pushTag(self, tag): + """Internal method called by handle_starttag when a tag is opened.""" + #print("Push", tag.name) + if self.currentTag is not None: + self.currentTag.contents.append(tag) + self.tagStack.append(tag) + self.currentTag = self.tagStack[-1] + if tag.name in self.builder.preserve_whitespace_tags: + self.preserve_whitespace_tag_stack.append(tag) + if tag.name in self.builder.string_containers: + self.string_container_stack.append(tag) + + def endData(self, containerClass=None): + """Method called by the TreeBuilder when the end of a data segment + occurs. + """ + containerClass = self.string_container(containerClass) + + if self.current_data: + current_data = ''.join(self.current_data) + # If whitespace is not preserved, and this string contains + # nothing but ASCII spaces, replace it with a single space + # or newline. + if not self.preserve_whitespace_tag_stack: + strippable = True + for i in current_data: + if i not in self.ASCII_SPACES: + strippable = False + break + if strippable: + if '\n' in current_data: + current_data = '\n' + else: + current_data = ' ' + + # Reset the data collector. + self.current_data = [] + + # Should we add this string to the tree at all? 
+ if self.parse_only and len(self.tagStack) <= 1 and \ + (not self.parse_only.text or \ + not self.parse_only.search(current_data)): + return + + o = containerClass(current_data) + self.object_was_parsed(o) + + def object_was_parsed(self, o, parent=None, most_recent_element=None): + """Method called by the TreeBuilder to integrate an object into the parse tree.""" + if parent is None: + parent = self.currentTag + if most_recent_element is not None: + previous_element = most_recent_element + else: + previous_element = self._most_recent_element + + next_element = previous_sibling = next_sibling = None + if isinstance(o, Tag): + next_element = o.next_element + next_sibling = o.next_sibling + previous_sibling = o.previous_sibling + if previous_element is None: + previous_element = o.previous_element + + fix = parent.next_element is not None + + o.setup(parent, previous_element, next_element, previous_sibling, next_sibling) + + self._most_recent_element = o + parent.contents.append(o) + + # Check if we are inserting into an already parsed node. + if fix: + self._linkage_fixer(parent) + + def _linkage_fixer(self, el): + """Make sure linkage of this fragment is sound.""" + + first = el.contents[0] + child = el.contents[-1] + descendant = child + + if child is first and el.parent is not None: + # Parent should be linked to first child + el.next_element = child + # We are no longer linked to whatever this element is + prev_el = child.previous_element + if prev_el is not None and prev_el is not el: + prev_el.next_element = None + # First child should be linked to the parent, and no previous siblings. + child.previous_element = el + child.previous_sibling = None + + # We have no sibling as we've been appended as the last. + child.next_sibling = None + + # This index is a tag, dig deeper for a "last descendant" + if isinstance(child, Tag) and child.contents: + descendant = child._last_descendant(False) + + # As the final step, link last descendant. It should be linked + # to the parent's next sibling (if found), else walk up the chain + # and find a parent with a sibling. It should have no next sibling. + descendant.next_element = None + descendant.next_sibling = None + target = el + while True: + if target is None: + break + elif target.next_sibling is not None: + descendant.next_element = target.next_sibling + target.next_sibling.previous_element = child + break + target = target.parent + + def _popToTag(self, name, nsprefix=None, inclusivePop=True): + """Pops the tag stack up to and including the most recent + instance of the given tag. + + :param name: Pop up to the most recent tag with this name. + :param nsprefix: The namespace prefix that goes with `name`. + :param inclusivePop: It this is false, pops the tag stack up + to but *not* including the most recent instqance of the + given tag. + """ + #print("Popping to %s" % name) + if name == self.ROOT_TAG_NAME: + # The BeautifulSoup object itself can never be popped. + return + + most_recently_popped = None + + stack_size = len(self.tagStack) + for i in range(stack_size - 1, 0, -1): + t = self.tagStack[i] + if (name == t.name and nsprefix == t.prefix): + if inclusivePop: + most_recently_popped = self.popTag() + break + most_recently_popped = self.popTag() + + return most_recently_popped + + def handle_starttag(self, name, namespace, nsprefix, attrs, sourceline=None, + sourcepos=None): + """Called by the tree builder when a new tag is encountered. + + :param name: Name of the tag. + :param nsprefix: Namespace prefix for the tag. 
+ :param attrs: A dictionary of attribute values. + :param sourceline: The line number where this tag was found in its + source document. + :param sourcepos: The character position within `sourceline` where this + tag was found. + + If this method returns None, the tag was rejected by an active + SoupStrainer. You should proceed as if the tag had not occurred + in the document. For instance, if this was a self-closing tag, + don't call handle_endtag. + """ + # print("Start tag %s: %s" % (name, attrs)) + self.endData() + + if (self.parse_only and len(self.tagStack) <= 1 + and (self.parse_only.text + or not self.parse_only.search_tag(name, attrs))): + return None + + tag = self.element_classes.get(Tag, Tag)( + self, self.builder, name, namespace, nsprefix, attrs, + self.currentTag, self._most_recent_element, + sourceline=sourceline, sourcepos=sourcepos + ) + if tag is None: + return tag + if self._most_recent_element is not None: + self._most_recent_element.next_element = tag + self._most_recent_element = tag + self.pushTag(tag) + return tag + + def handle_endtag(self, name, nsprefix=None): + """Called by the tree builder when an ending tag is encountered. + + :param name: Name of the tag. + :param nsprefix: Namespace prefix for the tag. + """ + #print("End tag: " + name) + self.endData() + self._popToTag(name, nsprefix) + + def handle_data(self, data): + """Called by the tree builder when a chunk of textual data is encountered.""" + self.current_data.append(data) + + def decode(self, pretty_print=False, + eventual_encoding=DEFAULT_OUTPUT_ENCODING, + formatter="minimal"): + """Returns a string or Unicode representation of the parse tree + as an HTML or XML document. + + :param pretty_print: If this is True, indentation will be used to + make the document more readable. + :param eventual_encoding: The encoding of the final document. + If this is None, the document will be a Unicode string. + """ + if self.is_xml: + # Print the XML declaration + encoding_part = '' + if eventual_encoding in PYTHON_SPECIFIC_ENCODINGS: + # This is a special Python encoding; it can't actually + # go into an XML document because it means nothing + # outside of Python. + eventual_encoding = None + if eventual_encoding != None: + encoding_part = ' encoding="%s"' % eventual_encoding + prefix = '\n' % encoding_part + else: + prefix = '' + if not pretty_print: + indent_level = None + else: + indent_level = 0 + return prefix + super(BeautifulSoup, self).decode( + indent_level, eventual_encoding, formatter) + +# Aliases to make it easier to get started quickly, e.g. 'from bs4 import _soup' +_s = BeautifulSoup +_soup = BeautifulSoup + +class BeautifulStoneSoup(BeautifulSoup): + """Deprecated interface to an XML parser.""" + + def __init__(self, *args, **kwargs): + kwargs['features'] = 'xml' + warnings.warn( + 'The BeautifulStoneSoup class is deprecated. Instead of using ' + 'it, pass features="xml" into the BeautifulSoup constructor.') + super(BeautifulStoneSoup, self).__init__(*args, **kwargs) + + +class StopParsing(Exception): + """Exception raised by a TreeBuilder if it's unable to continue parsing.""" + pass + +class FeatureNotFound(ValueError): + """Exception raised by the BeautifulSoup constructor if no parser with the + requested features is found. + """ + pass + + +#If this file is run as a script, act as an HTML pretty-printer. 
+if __name__ == '__main__':
+    import sys
+    soup = BeautifulSoup(sys.stdin)
+    print((soup.prettify()))
diff --git a/venv/Lib/site-packages/bs4/__pycache__/__init__.cpython-36.pyc b/venv/Lib/site-packages/bs4/__pycache__/__init__.cpython-36.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..fc75f9a23e59897c3fb062c808923715520a7736
GIT binary patch
literal 22732
[base85-encoded binary data omitted]
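For reference, a minimal usage sketch of the constructor behaviour implemented in bs4/__init__.py above: naming a parser explicitly avoids the "no parser was explicitly specified" warning path, short markup that looks like a URL only triggers MarkupResemblesLocatorWarning (it is still parsed, never fetched), and copy.copy() round-trips the tree through UTF-8 and re-parses it. This sketch is illustrative and not part of the patch; it assumes the vendored bs4 behaves like the upstream 4.9.x release, and the markup and variable names are invented for the example.

import copy
import warnings

from bs4 import BeautifulSoup, MarkupResemblesLocatorWarning

markup = "<html><body><p class='intro'>Hello &amp; welcome</p></body></html>"

# An explicit parser name skips the "guessed at parser" warning branch above.
soup = BeautifulSoup(markup, "html.parser")

# A short string that looks like a URL is parsed as markup, with a warning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    BeautifulSoup("https://example.com", "html.parser")
assert any(w.category is MarkupResemblesLocatorWarning for w in caught)

# __copy__ re-parses the UTF-8 encoding of the tree with the same builder.
clone = copy.copy(soup)
print(clone.p["class"], clone.original_encoding == soup.original_encoding)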

diff --git a/venv/Lib/site-packages/bs4/__pycache__/formatter.cpython-36.pyc b/venv/Lib/site-packages/bs4/__pycache__/formatter.cpython-36.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..c1cea6bd40d147482165dcedf226bdd64b913725
GIT binary patch
literal 5421
[base85-encoded binary data omitted]
zhP+=!PCriXKQMm_oSye9sL(wU+FW~QoL3jfZ*aW!pv{Q*t90>tAM87N@9BYdjPc<2 zhTa)e>@jK~ry$|>CA{YLYZ2j8pkJ~-b273 zDb5r9Hrb6dQmI>ot)OP9pE)KUW*D~DA@xC*9U3ELu9XwoMQmn%e zt$8;I%N@-d4?QS2uwMY8Z2WpkWHaNJG6Bcp&0;N~gpID$OqW5<$M9?JhZ@$$m#@dm z)kS;`9-oKjK8h^Wcf?D0xUY=dlZ-eZ?FoRov{<(aEW? zqfd?g&V3q$xPts*??yfvE>a6 zRW6R(APc*29`XzCq5>wqX|-0P8-gVRvb*KS31Ff80$}j&n=J1=Td`=@A-p(vtM$SQ z&EY2K6e$3l3Ig#3-yoqic+#iJqdykvf)Z}|J;*U8{DYNaN{GAIKnz3pX2_S3tgi4U z+t3-(2CJw0B4Asx*|13qDq4}1q8^MG&~tYjq2 z!Di;1Q)ZOh&KSnV>sj`bF~T)`aC!!{9@`YmS|?1VQl_@Vi9RX<^%sZv+5>E1Lh_8I zai$d6`CjBISec+Vlg@O|%+GNIWT{H*BmtP{3;*@?Ex0LO6XXJJP9Tk$T9B4=DBO+v z8E{hKdg^Gaq3?+V?qj`O5T6PZrY3eo_$yQS5_TZLEs+l6Jm-!-9btl{ev!Q-GDN6s z{y^V8LSwq+_AM0n42yIpanZ$`#Sn^09DuMmgm)*wO$FCxZ<0uo2+;S#f)$1)7AynP z5#rzEB5H@(`V!<;+&UPeOhH0oO9)nUEELfq6rvjnZYhC3>d%qIwjsd}$zGYC-7UeOGoYWP~jIJ|bTnoR^j#wI4m$@+#C zwWE_qPaMV8s$iL;tI)dTNq_3&&Xy-ex?R6K{VJiQ6-%=I_g|cPxAE1($roRA4XImx z$>MS-EI+n!O_Dav7MI-R!dUQ0Rltn<22kmgFF7w%i?VUPqm5ur05`?358k2&FlpV{ zbgjQ%XplvjP&6q2*Slajb!S)?ZsMy4<^BMar|8!JHVT`*03oxDA{@V+{Ehy?%w`Cr zm-C)|mbJv@{8kqP`Y-0TS{MSI4R*kTi=j-xkK?3^u0nCU{*we)b+GHxe5e54NZRf% zcT?srK1$x}uEO_P5ra$^R4E^P?f_X5Edu=T0Wai1HcK$2J8Oys9ek5A5U6#OSbj=D zTF6WUoyrb<5f~6@#c@CMfdyVYjQjlDg4AgxMKXt5LmbU^khRB9&Z{8_W@1}T>{+z;SJSyM)qvVVj#80?A$bd{uGif z43w3&5;dm2aF=wGa-~H0uGtj|cw+MI{*`9P}WX1J`55IW|+^_zKhDMB= z8;ti+&^WVQAKwZvbvz%hEQ>5YbuF+mM*mhEt?9+~4^gBws2YgW4kGGyD};3*dwzI8 zr0-zGg}p!c^u>QC+-2>JB_s+Z{6qU&Z2GlkK~JHFKLI^e1_18Y!W8ZOTeR&RM$*Sf zBeu>cc)YgN`LVue6*fBG+huLmrsZGfV}FAQO{LmoEEeS7;^lv2@*_-so5}Am`CTUe znaS@l`7caB7XLdqw+LhkOUg{3PD9HE1VcKseh>bY~A+ znt!Ix73VShjp=4$tM6prLmZ)ev$aqEH)_WY0^tr!(}VB?PC5xE_9sywg7+15);b)U zy-sw7XWNs(?)_}0Sn?1ugRb?w?6>LPkU>rN;wbI?H@rYDrCV4pZZJ1AUx*Mb?G=x@ zy`Xguq1Bc)57cLfg{A6wKSS;}IC4!FMabNU3 znm$6WD0E_gQL7t8R`E-j2_5U?BH4$6+3GWMat3dI)Wd%+*ahl)Rr zPgJc!ciZs-(cGYn9lr!W8fy*0G^0w~3*r4Q`!ez#gK(Uv;sd``Z-Q?Ukn%*Vxv%?e zY)QQ!#=WngDQ*!e_-OYpf_A|TF+)E{o{`cmJ>GsmI=&<2LBAIneLzOGwJnbbAVs%= zAP#wtvrjnC*5CrZo>Q+@%YH76-Aahi1z#7ARxUp89~Y}pZX3?VjcUXr->p%d@}GRg z*&E~;sjmKFInlw}jzrfqU*hs80LUk@G+%cYE?cSU{51xGJjOh-GlDwu;-T@hDA!77 xmaD3PkN 0: + feature = features.pop() + we_have_the_feature = self.builders_for_feature.get(feature, []) + if len(we_have_the_feature) > 0: + if candidates is None: + candidates = we_have_the_feature + candidate_set = set(candidates) + else: + # Eliminate any candidates that don't have this feature. + candidate_set = candidate_set.intersection( + set(we_have_the_feature)) + + # The only valid candidates are the ones in candidate_set. + # Go through the original list of candidates and pick the first one + # that's in candidate_set. + if candidate_set is None: + return None + for candidate in candidates: + if candidate in candidate_set: + return candidate + return None + +# The BeautifulSoup class will take feature lists from developers and use them +# to look up builders in this registry. +builder_registry = TreeBuilderRegistry() + +class TreeBuilder(object): + """Turn a textual document into a Beautiful Soup object tree.""" + + NAME = "[Unknown tree builder]" + ALTERNATE_NAMES = [] + features = [] + + is_xml = False + picklable = False + empty_element_tags = None # A tag will be considered an empty-element + # tag when and only when it has no contents. + + # A value for these tag/attribute combinations is a space- or + # comma-separated list of CDATA, rather than a single CDATA. + DEFAULT_CDATA_LIST_ATTRIBUTES = {} + + # Whitespace should be preserved inside these tags. 
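+    # Illustrative note, not upstream bs4 text: HTML-oriented builders
+    # typically override this default with something like
+    # set(['pre', 'textarea']) so that pretty-printing leaves the
+    # contents of those tags alone.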
+ DEFAULT_PRESERVE_WHITESPACE_TAGS = set() + + # The textual contents of tags with these names should be + # instantiated with some class other than NavigableString. + DEFAULT_STRING_CONTAINERS = {} + + USE_DEFAULT = object() + + # Most parsers don't keep track of line numbers. + TRACKS_LINE_NUMBERS = False + + def __init__(self, multi_valued_attributes=USE_DEFAULT, + preserve_whitespace_tags=USE_DEFAULT, + store_line_numbers=USE_DEFAULT, + string_containers=USE_DEFAULT, + ): + """Constructor. + + :param multi_valued_attributes: If this is set to None, the + TreeBuilder will not turn any values for attributes like + 'class' into lists. Setting this to a dictionary will + customize this behavior; look at DEFAULT_CDATA_LIST_ATTRIBUTES + for an example. + + Internally, these are called "CDATA list attributes", but that + probably doesn't make sense to an end-user, so the argument name + is `multi_valued_attributes`. + + :param preserve_whitespace_tags: A list of tags to treat + the way
<pre> tags are treated in HTML. Tags in this list
+         are immune from pretty-printing; their contents will always be
+         output as-is.
+
+        :param string_containers: A dictionary mapping tag names to
+        the classes that should be instantiated to contain the textual
+        contents of those tags. The default is to use NavigableString
+        for every tag, no matter what the name. You can override the
+        default by changing DEFAULT_STRING_CONTAINERS.
+
+        :param store_line_numbers: If the parser keeps track of the
+         line numbers and positions of the original markup, that
+         information will, by default, be stored in each corresponding
+         `Tag` object. You can turn this off by passing
+         store_line_numbers=False. If the parser you're using doesn't 
+         keep track of this information, then setting store_line_numbers=True
+         will do nothing.
+        """
+        self.soup = None
+        if multi_valued_attributes is self.USE_DEFAULT:
+            multi_valued_attributes = self.DEFAULT_CDATA_LIST_ATTRIBUTES
+        self.cdata_list_attributes = multi_valued_attributes
+        if preserve_whitespace_tags is self.USE_DEFAULT:
+            preserve_whitespace_tags = self.DEFAULT_PRESERVE_WHITESPACE_TAGS
+        self.preserve_whitespace_tags = preserve_whitespace_tags
+        if store_line_numbers is self.USE_DEFAULT:
+            store_line_numbers = self.TRACKS_LINE_NUMBERS
+        self.store_line_numbers = store_line_numbers 
+        if string_containers is self.USE_DEFAULT:
+            string_containers = self.DEFAULT_STRING_CONTAINERS
+        self.string_containers = string_containers
+        
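+    # Illustrative sketch, not upstream bs4 code: these keyword arguments
+    # are normally supplied through the BeautifulSoup constructor, which
+    # forwards unrecognized keyword arguments to the tree builder. For
+    # example, to keep 'class' as a single string instead of a list:
+    #
+    #   from bs4 import BeautifulSoup
+    #   soup = BeautifulSoup('<p class="a b">hi</p>', 'html.parser',
+    #                        multi_valued_attributes=None)
+    #   soup.p['class']  # -> 'a b' rather than ['a', 'b']
+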
+    def initialize_soup(self, soup):
+        """The BeautifulSoup object has been initialized and is now
+        being associated with the TreeBuilder.
+
+        :param soup: A BeautifulSoup object.
+        """
+        self.soup = soup
+        
+    def reset(self):
+        """Do any work necessary to reset the underlying parser
+        for a new document.
+
+        By default, this does nothing.
+        """
+        pass
+
+    def can_be_empty_element(self, tag_name):
+        """Might a tag with this name be an empty-element tag?
+
+        The final markup may or may not actually present this tag as
+        self-closing.
+
+        For instance: an HTMLBuilder does not consider a <p> tag to be
+        an empty-element tag (it's not in
+        HTMLBuilder.empty_element_tags). This means an empty <p> tag
+        will be presented as "<p></p>", not "<p/>" or "<p>
". + + The default implementation has no opinion about which tags are + empty-element tags, so a tag will be presented as an + empty-element tag if and only if it has no children. + "" will become "", and "bar" will + be left alone. + + :param tag_name: The name of a markup tag. + """ + if self.empty_element_tags is None: + return True + return tag_name in self.empty_element_tags + + def feed(self, markup): + """Run some incoming markup through some parsing process, + populating the `BeautifulSoup` object in self.soup. + + This method is not implemented in TreeBuilder; it must be + implemented in subclasses. + + :return: None. + """ + raise NotImplementedError() + + def prepare_markup(self, markup, user_specified_encoding=None, + document_declared_encoding=None, exclude_encodings=None): + """Run any preliminary steps necessary to make incoming markup + acceptable to the parser. + + :param markup: Some markup -- probably a bytestring. + :param user_specified_encoding: The user asked to try this encoding. + :param document_declared_encoding: The markup itself claims to be + in this encoding. + :param exclude_encodings: The user asked _not_ to try any of + these encodings. + + :yield: A series of 4-tuples: + (markup, encoding, declared encoding, + has undergone character replacement) + + Each 4-tuple represents a strategy for converting the + document to Unicode and parsing it. Each strategy will be tried + in turn. + + By default, the only strategy is to parse the markup + as-is. See `LXMLTreeBuilderForXML` and + `HTMLParserTreeBuilder` for implementations that take into + account the quirks of particular parsers. + """ + yield markup, None, None, False + + def test_fragment_to_document(self, fragment): + """Wrap an HTML fragment to make it look like a document. + + Different parsers do this differently. For instance, lxml + introduces an empty tag, and html5lib + doesn't. Abstracting this away lets us write simple tests + which run HTML fragments through the parser and compare the + results against other HTML fragments. + + This method should not be used outside of tests. + + :param fragment: A string -- fragment of HTML. + :return: A string -- a full HTML document. + """ + return fragment + + def set_up_substitutions(self, tag): + """Set up any substitutions that will need to be performed on + a `Tag` when it's output as a string. + + By default, this does nothing. See `HTMLTreeBuilder` for a + case where this is used. + + :param tag: A `Tag` + :return: Whether or not a substitution was performed. + """ + return False + + def _replace_cdata_list_attribute_values(self, tag_name, attrs): + """When an attribute value is associated with a tag that can + have multiple values for that attribute, convert the string + value to a list of strings. + + Basically, replaces class="foo bar" with class=["foo", "bar"] + + NOTE: This method modifies its input in place. + + :param tag_name: The name of a tag. + :param attrs: A dictionary containing the tag's attributes. + Any appropriate attribute values will be modified in place. + """ + if not attrs: + return attrs + if self.cdata_list_attributes: + universal = self.cdata_list_attributes.get('*', []) + tag_specific = self.cdata_list_attributes.get( + tag_name.lower(), None) + for attr in list(attrs.keys()): + if attr in universal or (tag_specific and attr in tag_specific): + # We have a "class"-type attribute whose string + # value is a whitespace-separated list of + # values. Split it into a list. 
+ value = attrs[attr] + if isinstance(value, str): + values = nonwhitespace_re.findall(value) + else: + # html5lib sometimes calls setAttributes twice + # for the same tag when rearranging the parse + # tree. On the second call the attribute value + # here is already a list. If this happens, + # leave the value alone rather than trying to + # split it again. + values = value + attrs[attr] = values + return attrs + +class SAXTreeBuilder(TreeBuilder): + """A Beautiful Soup treebuilder that listens for SAX events. + + This is not currently used for anything, but it demonstrates + how a simple TreeBuilder would work. + """ + + def feed(self, markup): + raise NotImplementedError() + + def close(self): + pass + + def startElement(self, name, attrs): + attrs = dict((key[1], value) for key, value in list(attrs.items())) + #print("Start %s, %r" % (name, attrs)) + self.soup.handle_starttag(name, attrs) + + def endElement(self, name): + #print("End %s" % name) + self.soup.handle_endtag(name) + + def startElementNS(self, nsTuple, nodeName, attrs): + # Throw away (ns, nodeName) for now. + self.startElement(nodeName, attrs) + + def endElementNS(self, nsTuple, nodeName): + # Throw away (ns, nodeName) for now. + self.endElement(nodeName) + #handler.endElementNS((ns, node.nodeName), node.nodeName) + + def startPrefixMapping(self, prefix, nodeValue): + # Ignore the prefix for now. + pass + + def endPrefixMapping(self, prefix): + # Ignore the prefix for now. + # handler.endPrefixMapping(prefix) + pass + + def characters(self, content): + self.soup.handle_data(content) + + def startDocument(self): + pass + + def endDocument(self): + pass + + +class HTMLTreeBuilder(TreeBuilder): + """This TreeBuilder knows facts about HTML. + + Such as which tags are empty-element tags. + """ + + empty_element_tags = set([ + # These are from HTML5. + 'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input', 'keygen', 'link', 'menuitem', 'meta', 'param', 'source', 'track', 'wbr', + + # These are from earlier versions of HTML and are removed in HTML5. + 'basefont', 'bgsound', 'command', 'frame', 'image', 'isindex', 'nextid', 'spacer' + ]) + + # The HTML standard defines these as block-level elements. Beautiful + # Soup does not treat these elements differently from other elements, + # but it may do so eventually, and this information is available if + # you need to use it. + block_elements = set(["address", "article", "aside", "blockquote", "canvas", "dd", "div", "dl", "dt", "fieldset", "figcaption", "figure", "footer", "form", "h1", "h2", "h3", "h4", "h5", "h6", "header", "hr", "li", "main", "nav", "noscript", "ol", "output", "p", "pre", "section", "table", "tfoot", "ul", "video"]) + + # The HTML standard defines an unusual content model for these tags. + # We represent this by using a string class other than NavigableString + # inside these tags. + # + # I made this list by going through the HTML spec + # (https://html.spec.whatwg.org/#metadata-content) and looking for + # "metadata content" elements that can contain strings. + # + # TODO: Arguably

as a +# string. +# +# XXX This code can be removed once most Python 3 users are on 3.2.3. +if major == 3 and minor == 2 and not CONSTRUCTOR_TAKES_STRICT: + import re + attrfind_tolerant = re.compile( + r'\s*((?<=[\'"\s])[^\s/>][^\s/=>]*)(\s*=+\s*' + r'(\'[^\']*\'|"[^"]*"|(?![\'"])[^>\s]*))?') + HTMLParserTreeBuilder.attrfind_tolerant = attrfind_tolerant + + locatestarttagend = re.compile(r""" + <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name + (?:\s+ # whitespace before attribute name + (?:[a-zA-Z_][-.:a-zA-Z0-9_]* # attribute name + (?:\s*=\s* # value indicator + (?:'[^']*' # LITA-enclosed value + |\"[^\"]*\" # LIT-enclosed value + |[^'\">\s]+ # bare value + ) + )? + ) + )* + \s* # trailing whitespace +""", re.VERBOSE) + BeautifulSoupHTMLParser.locatestarttagend = locatestarttagend + + from html.parser import tagfind, attrfind + + def parse_starttag(self, i): + self.__starttag_text = None + endpos = self.check_for_whole_start_tag(i) + if endpos < 0: + return endpos + rawdata = self.rawdata + self.__starttag_text = rawdata[i:endpos] + + # Now parse the data between i+1 and j into a tag and attrs + attrs = [] + match = tagfind.match(rawdata, i+1) + assert match, 'unexpected call to parse_starttag()' + k = match.end() + self.lasttag = tag = rawdata[i+1:k].lower() + while k < endpos: + if self.strict: + m = attrfind.match(rawdata, k) + else: + m = attrfind_tolerant.match(rawdata, k) + if not m: + break + attrname, rest, attrvalue = m.group(1, 2, 3) + if not rest: + attrvalue = None + elif attrvalue[:1] == '\'' == attrvalue[-1:] or \ + attrvalue[:1] == '"' == attrvalue[-1:]: + attrvalue = attrvalue[1:-1] + if attrvalue: + attrvalue = self.unescape(attrvalue) + attrs.append((attrname.lower(), attrvalue)) + k = m.end() + + end = rawdata[k:endpos].strip() + if end not in (">", "/>"): + lineno, offset = self.getpos() + if "\n" in self.__starttag_text: + lineno = lineno + self.__starttag_text.count("\n") + offset = len(self.__starttag_text) \ + - self.__starttag_text.rfind("\n") + else: + offset = offset + len(self.__starttag_text) + if self.strict: + self.error("junk characters in start tag: %r" + % (rawdata[k:endpos][:20],)) + self.handle_data(rawdata[i:endpos]) + return endpos + if end.endswith('/>'): + # XHTML-style empty tag: + self.handle_startendtag(tag, attrs) + else: + self.handle_starttag(tag, attrs) + if tag in self.CDATA_CONTENT_ELEMENTS: + self.set_cdata_mode(tag) + return endpos + + def set_cdata_mode(self, elem): + self.cdata_elem = elem.lower() + self.interesting = re.compile(r'' % self.cdata_elem, re.I) + + BeautifulSoupHTMLParser.parse_starttag = parse_starttag + BeautifulSoupHTMLParser.set_cdata_mode = set_cdata_mode + + CONSTRUCTOR_TAKES_STRICT = True diff --git a/venv/Lib/site-packages/bs4/builder/_lxml.py b/venv/Lib/site-packages/bs4/builder/_lxml.py new file mode 100644 index 0000000..432a2c8 --- /dev/null +++ b/venv/Lib/site-packages/bs4/builder/_lxml.py @@ -0,0 +1,332 @@ +# Use of this source code is governed by the MIT license. 
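+# Illustrative note, not upstream bs4 text: the builders below are selected
+# by feature name, e.g. BeautifulSoup(markup, "lxml") picks the HTML builder
+# (LXMLTreeBuilder), while BeautifulSoup(markup, "lxml-xml") or
+# BeautifulSoup(markup, "xml") picks LXMLTreeBuilderForXML. Both require the
+# third-party lxml package.
+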
+__license__ = "MIT" + +__all__ = [ + 'LXMLTreeBuilderForXML', + 'LXMLTreeBuilder', + ] + +try: + from collections.abc import Callable # Python 3.6 +except ImportError as e: + from collections import Callable + +from io import BytesIO +from io import StringIO +from lxml import etree +from bs4.element import ( + Comment, + Doctype, + NamespacedAttribute, + ProcessingInstruction, + XMLProcessingInstruction, +) +from bs4.builder import ( + FAST, + HTML, + HTMLTreeBuilder, + PERMISSIVE, + ParserRejectedMarkup, + TreeBuilder, + XML) +from bs4.dammit import EncodingDetector + +LXML = 'lxml' + +def _invert(d): + "Invert a dictionary." + return dict((v,k) for k, v in list(d.items())) + +class LXMLTreeBuilderForXML(TreeBuilder): + DEFAULT_PARSER_CLASS = etree.XMLParser + + is_xml = True + processing_instruction_class = XMLProcessingInstruction + + NAME = "lxml-xml" + ALTERNATE_NAMES = ["xml"] + + # Well, it's permissive by XML parser standards. + features = [NAME, LXML, XML, FAST, PERMISSIVE] + + CHUNK_SIZE = 512 + + # This namespace mapping is specified in the XML Namespace + # standard. + DEFAULT_NSMAPS = dict(xml='http://www.w3.org/XML/1998/namespace') + + DEFAULT_NSMAPS_INVERTED = _invert(DEFAULT_NSMAPS) + + # NOTE: If we parsed Element objects and looked at .sourceline, + # we'd be able to see the line numbers from the original document. + # But instead we build an XMLParser or HTMLParser object to serve + # as the target of parse messages, and those messages don't include + # line numbers. + # See: https://bugs.launchpad.net/lxml/+bug/1846906 + + def initialize_soup(self, soup): + """Let the BeautifulSoup object know about the standard namespace + mapping. + + :param soup: A `BeautifulSoup`. + """ + super(LXMLTreeBuilderForXML, self).initialize_soup(soup) + self._register_namespaces(self.DEFAULT_NSMAPS) + + def _register_namespaces(self, mapping): + """Let the BeautifulSoup object know about namespaces encountered + while parsing the document. + + This might be useful later on when creating CSS selectors. + + :param mapping: A dictionary mapping namespace prefixes to URIs. + """ + for key, value in list(mapping.items()): + if key and key not in self.soup._namespaces: + # Let the BeautifulSoup object know about a new namespace. + # If there are multiple namespaces defined with the same + # prefix, the first one in the document takes precedence. + self.soup._namespaces[key] = value + + def default_parser(self, encoding): + """Find the default parser for the given encoding. + + :param encoding: A string. + :return: Either a parser object or a class, which + will be instantiated with default arguments. + """ + if self._default_parser is not None: + return self._default_parser + return etree.XMLParser( + target=self, strip_cdata=False, recover=True, encoding=encoding) + + def parser_for(self, encoding): + """Instantiate an appropriate parser for the given encoding. + + :param encoding: A string. + :return: A parser object such as an `etree.XMLParser`. + """ + # Use the default parser. + parser = self.default_parser(encoding) + + if isinstance(parser, Callable): + # Instantiate the parser with default arguments + parser = parser( + target=self, strip_cdata=False, recover=True, encoding=encoding + ) + return parser + + def __init__(self, parser=None, empty_element_tags=None, **kwargs): + # TODO: Issue a warning if parser is present but not a + # callable, since that means there's no way to create new + # parsers for different encodings. 
+ self._default_parser = parser + if empty_element_tags is not None: + self.empty_element_tags = set(empty_element_tags) + self.soup = None + self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED] + super(LXMLTreeBuilderForXML, self).__init__(**kwargs) + + def _getNsTag(self, tag): + # Split the namespace URL out of a fully-qualified lxml tag + # name. Copied from lxml's src/lxml/sax.py. + if tag[0] == '{': + return tuple(tag[1:].split('}', 1)) + else: + return (None, tag) + + def prepare_markup(self, markup, user_specified_encoding=None, + exclude_encodings=None, + document_declared_encoding=None): + """Run any preliminary steps necessary to make incoming markup + acceptable to the parser. + + lxml really wants to get a bytestring and convert it to + Unicode itself. So instead of using UnicodeDammit to convert + the bytestring to Unicode using different encodings, this + implementation uses EncodingDetector to iterate over the + encodings, and tell lxml to try to parse the document as each + one in turn. + + :param markup: Some markup -- hopefully a bytestring. + :param user_specified_encoding: The user asked to try this encoding. + :param document_declared_encoding: The markup itself claims to be + in this encoding. + :param exclude_encodings: The user asked _not_ to try any of + these encodings. + + :yield: A series of 4-tuples: + (markup, encoding, declared encoding, + has undergone character replacement) + + Each 4-tuple represents a strategy for converting the + document to Unicode and parsing it. Each strategy will be tried + in turn. + """ + is_html = not self.is_xml + if is_html: + self.processing_instruction_class = ProcessingInstruction + else: + self.processing_instruction_class = XMLProcessingInstruction + + if isinstance(markup, str): + # We were given Unicode. Maybe lxml can parse Unicode on + # this system? + yield markup, None, document_declared_encoding, False + + if isinstance(markup, str): + # No, apparently not. Convert the Unicode to UTF-8 and + # tell lxml to parse it as UTF-8. + yield (markup.encode("utf8"), "utf8", + document_declared_encoding, False) + + try_encodings = [user_specified_encoding, document_declared_encoding] + detector = EncodingDetector( + markup, try_encodings, is_html, exclude_encodings) + for encoding in detector.encodings: + yield (detector.markup, encoding, document_declared_encoding, False) + + def feed(self, markup): + if isinstance(markup, bytes): + markup = BytesIO(markup) + elif isinstance(markup, str): + markup = StringIO(markup) + + # Call feed() at least once, even if the markup is empty, + # or the parser won't be initialized. + data = markup.read(self.CHUNK_SIZE) + try: + self.parser = self.parser_for(self.soup.original_encoding) + self.parser.feed(data) + while len(data) != 0: + # Now call feed() on the rest of the data, chunk by chunk. + data = markup.read(self.CHUNK_SIZE) + if len(data) != 0: + self.parser.feed(data) + self.parser.close() + except (UnicodeDecodeError, LookupError, etree.ParserError) as e: + raise ParserRejectedMarkup(e) + + def close(self): + self.nsmaps = [self.DEFAULT_NSMAPS_INVERTED] + + def start(self, name, attrs, nsmap={}): + # Make sure attrs is a mutable dict--lxml may send an immutable dictproxy. + attrs = dict(attrs) + nsprefix = None + # Invert each namespace map as it comes in. + if len(nsmap) == 0 and len(self.nsmaps) > 1: + # There are no new namespaces for this tag, but + # non-default namespaces are in play, so we need a + # separate tag stack to know when they end. 
+ self.nsmaps.append(None) + elif len(nsmap) > 0: + # A new namespace mapping has come into play. + + # First, Let the BeautifulSoup object know about it. + self._register_namespaces(nsmap) + + # Then, add it to our running list of inverted namespace + # mappings. + self.nsmaps.append(_invert(nsmap)) + + # Also treat the namespace mapping as a set of attributes on the + # tag, so we can recreate it later. + attrs = attrs.copy() + for prefix, namespace in list(nsmap.items()): + attribute = NamespacedAttribute( + "xmlns", prefix, "http://www.w3.org/2000/xmlns/") + attrs[attribute] = namespace + + # Namespaces are in play. Find any attributes that came in + # from lxml with namespaces attached to their names, and + # turn then into NamespacedAttribute objects. + new_attrs = {} + for attr, value in list(attrs.items()): + namespace, attr = self._getNsTag(attr) + if namespace is None: + new_attrs[attr] = value + else: + nsprefix = self._prefix_for_namespace(namespace) + attr = NamespacedAttribute(nsprefix, attr, namespace) + new_attrs[attr] = value + attrs = new_attrs + + namespace, name = self._getNsTag(name) + nsprefix = self._prefix_for_namespace(namespace) + self.soup.handle_starttag(name, namespace, nsprefix, attrs) + + def _prefix_for_namespace(self, namespace): + """Find the currently active prefix for the given namespace.""" + if namespace is None: + return None + for inverted_nsmap in reversed(self.nsmaps): + if inverted_nsmap is not None and namespace in inverted_nsmap: + return inverted_nsmap[namespace] + return None + + def end(self, name): + self.soup.endData() + completed_tag = self.soup.tagStack[-1] + namespace, name = self._getNsTag(name) + nsprefix = None + if namespace is not None: + for inverted_nsmap in reversed(self.nsmaps): + if inverted_nsmap is not None and namespace in inverted_nsmap: + nsprefix = inverted_nsmap[namespace] + break + self.soup.handle_endtag(name, nsprefix) + if len(self.nsmaps) > 1: + # This tag, or one of its parents, introduced a namespace + # mapping, so pop it off the stack. + self.nsmaps.pop() + + def pi(self, target, data): + self.soup.endData() + self.soup.handle_data(target + ' ' + data) + self.soup.endData(self.processing_instruction_class) + + def data(self, content): + self.soup.handle_data(content) + + def doctype(self, name, pubid, system): + self.soup.endData() + doctype = Doctype.for_name_and_ids(name, pubid, system) + self.soup.object_was_parsed(doctype) + + def comment(self, content): + "Handle comments as Comment objects." 
+ self.soup.endData() + self.soup.handle_data(content) + self.soup.endData(Comment) + + def test_fragment_to_document(self, fragment): + """See `TreeBuilder`.""" + return '\n%s' % fragment + + +class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML): + + NAME = LXML + ALTERNATE_NAMES = ["lxml-html"] + + features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE] + is_xml = False + processing_instruction_class = ProcessingInstruction + + def default_parser(self, encoding): + return etree.HTMLParser + + def feed(self, markup): + encoding = self.soup.original_encoding + try: + self.parser = self.parser_for(encoding) + self.parser.feed(markup) + self.parser.close() + except (UnicodeDecodeError, LookupError, etree.ParserError) as e: + raise ParserRejectedMarkup(e) + + + def test_fragment_to_document(self, fragment): + """See `TreeBuilder`.""" + return '%s' % fragment diff --git a/venv/Lib/site-packages/bs4/dammit.py b/venv/Lib/site-packages/bs4/dammit.py new file mode 100644 index 0000000..ee3708f --- /dev/null +++ b/venv/Lib/site-packages/bs4/dammit.py @@ -0,0 +1,939 @@ +# -*- coding: utf-8 -*- +"""Beautiful Soup bonus library: Unicode, Dammit + +This library converts a bytestream to Unicode through any means +necessary. It is heavily based on code from Mark Pilgrim's Universal +Feed Parser. It works best on XML and HTML, but it does not rewrite the +XML or HTML to reflect a new encoding; that's the tree builder's job. +""" +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +import codecs +from html.entities import codepoint2name +import re +import logging +import string + +# Import a library to autodetect character encodings. +chardet_type = None +try: + # First try the fast C implementation. + # PyPI package: cchardet + import cchardet + def chardet_dammit(s): + if isinstance(s, str): + return None + return cchardet.detect(s)['encoding'] +except ImportError: + try: + # Fall back to the pure Python implementation + # Debian package: python-chardet + # PyPI package: chardet + import chardet + def chardet_dammit(s): + if isinstance(s, str): + return None + return chardet.detect(s)['encoding'] + #import chardet.constants + #chardet.constants._debug = 1 + except ImportError: + # No chardet available. + def chardet_dammit(s): + return None + +# Available from http://cjkpython.i18n.org/. +# +# TODO: This doesn't work anymore and the closest thing, iconv_codecs, +# is GPL-licensed. Check whether this is still necessary. +try: + import iconv_codec +except ImportError: + pass + +# Build bytestring and Unicode versions of regular expressions for finding +# a declared encoding inside an XML or HTML document. +xml_encoding = '^\\s*<\\?.*encoding=[\'"](.*?)[\'"].*\\?>' +html_meta = '<\\s*meta[^>]+charset\\s*=\\s*["\']?([^>]*?)[ /;\'">]' +encoding_res = dict() +encoding_res[bytes] = { + 'html' : re.compile(html_meta.encode("ascii"), re.I), + 'xml' : re.compile(xml_encoding.encode("ascii"), re.I), +} +encoding_res[str] = { + 'html' : re.compile(html_meta, re.I), + 'xml' : re.compile(xml_encoding, re.I) +} + +class EntitySubstitution(object): + """The ability to substitute XML or HTML entities for certain characters.""" + + def _populate_class_variables(): + lookup = {} + reverse_lookup = {} + characters_for_re = [] + + # &apos is an XHTML entity and an HTML 5, but not an HTML 4 + # entity. We don't want to use it, but we want to recognize it on the way in. + # + # TODO: Ideally we would be able to recognize all HTML 5 named + # entities, but that's a little tricky. 
+ extra = [(39, 'apos')] + for codepoint, name in list(codepoint2name.items()) + extra: + character = chr(codepoint) + if codepoint not in (34, 39): + # There's no point in turning the quotation mark into + # " or the single quote into ', unless it + # happens within an attribute value, which is handled + # elsewhere. + characters_for_re.append(character) + lookup[character] = name + # But we do want to recognize those entities on the way in and + # convert them to Unicode characters. + reverse_lookup[name] = character + re_definition = "[%s]" % "".join(characters_for_re) + return lookup, reverse_lookup, re.compile(re_definition) + (CHARACTER_TO_HTML_ENTITY, HTML_ENTITY_TO_CHARACTER, + CHARACTER_TO_HTML_ENTITY_RE) = _populate_class_variables() + + CHARACTER_TO_XML_ENTITY = { + "'": "apos", + '"': "quot", + "&": "amp", + "<": "lt", + ">": "gt", + } + + BARE_AMPERSAND_OR_BRACKET = re.compile("([<>]|" + "&(?!#\\d+;|#x[0-9a-fA-F]+;|\\w+;)" + ")") + + AMPERSAND_OR_BRACKET = re.compile("([<>&])") + + @classmethod + def _substitute_html_entity(cls, matchobj): + """Used with a regular expression to substitute the + appropriate HTML entity for a special character.""" + entity = cls.CHARACTER_TO_HTML_ENTITY.get(matchobj.group(0)) + return "&%s;" % entity + + @classmethod + def _substitute_xml_entity(cls, matchobj): + """Used with a regular expression to substitute the + appropriate XML entity for a special character.""" + entity = cls.CHARACTER_TO_XML_ENTITY[matchobj.group(0)] + return "&%s;" % entity + + @classmethod + def quoted_attribute_value(self, value): + """Make a value into a quoted XML attribute, possibly escaping it. + + Most strings will be quoted using double quotes. + + Bob's Bar -> "Bob's Bar" + + If a string contains double quotes, it will be quoted using + single quotes. + + Welcome to "my bar" -> 'Welcome to "my bar"' + + If a string contains both single and double quotes, the + double quotes will be escaped, and the string will be quoted + using double quotes. + + Welcome to "Bob's Bar" -> "Welcome to "Bob's bar" + """ + quote_with = '"' + if '"' in value: + if "'" in value: + # The string contains both single and double + # quotes. Turn the double quotes into + # entities. We quote the double quotes rather than + # the single quotes because the entity name is + # """ whether this is HTML or XML. If we + # quoted the single quotes, we'd have to decide + # between ' and &squot;. + replace_with = """ + value = value.replace('"', replace_with) + else: + # There are double quotes but no single quotes. + # We can use single quotes to quote the attribute. + quote_with = "'" + return quote_with + value + quote_with + + @classmethod + def substitute_xml(cls, value, make_quoted_attribute=False): + """Substitute XML entities for special XML characters. + + :param value: A string to be substituted. The less-than sign + will become <, the greater-than sign will become >, + and any ampersands will become &. If you want ampersands + that appear to be part of an entity definition to be left + alone, use substitute_xml_containing_entities() instead. + + :param make_quoted_attribute: If True, then the string will be + quoted, as befits an attribute value. + """ + # Escape angle brackets and ampersands. 
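+        # For example (illustrative, not upstream code):
+        #   EntitySubstitution.substitute_xml('Drakes & <dragons>')
+        #   -> 'Drakes &amp; &lt;dragons&gt;'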
+ value = cls.AMPERSAND_OR_BRACKET.sub( + cls._substitute_xml_entity, value) + + if make_quoted_attribute: + value = cls.quoted_attribute_value(value) + return value + + @classmethod + def substitute_xml_containing_entities( + cls, value, make_quoted_attribute=False): + """Substitute XML entities for special XML characters. + + :param value: A string to be substituted. The less-than sign will + become <, the greater-than sign will become >, and any + ampersands that are not part of an entity defition will + become &. + + :param make_quoted_attribute: If True, then the string will be + quoted, as befits an attribute value. + """ + # Escape angle brackets, and ampersands that aren't part of + # entities. + value = cls.BARE_AMPERSAND_OR_BRACKET.sub( + cls._substitute_xml_entity, value) + + if make_quoted_attribute: + value = cls.quoted_attribute_value(value) + return value + + @classmethod + def substitute_html(cls, s): + """Replace certain Unicode characters with named HTML entities. + + This differs from data.encode(encoding, 'xmlcharrefreplace') + in that the goal is to make the result more readable (to those + with ASCII displays) rather than to recover from + errors. There's absolutely nothing wrong with a UTF-8 string + containg a LATIN SMALL LETTER E WITH ACUTE, but replacing that + character with "é" will make it more readable to some + people. + + :param s: A Unicode string. + """ + return cls.CHARACTER_TO_HTML_ENTITY_RE.sub( + cls._substitute_html_entity, s) + + +class EncodingDetector: + """Suggests a number of possible encodings for a bytestring. + + Order of precedence: + + 1. Encodings you specifically tell EncodingDetector to try first + (the override_encodings argument to the constructor). + + 2. An encoding declared within the bytestring itself, either in an + XML declaration (if the bytestring is to be interpreted as an XML + document), or in a tag (if the bytestring is to be + interpreted as an HTML document.) + + 3. An encoding detected through textual analysis by chardet, + cchardet, or a similar external library. + + 4. UTF-8. + + 5. Windows-1252. + """ + def __init__(self, markup, override_encodings=None, is_html=False, + exclude_encodings=None): + """Constructor. + + :param markup: Some markup in an unknown encoding. + :param override_encodings: These encodings will be tried first. + :param is_html: If True, this markup is considered to be HTML. Otherwise + it's assumed to be XML. + :param exclude_encodings: These encodings will not be tried, even + if they otherwise would be. + """ + self.override_encodings = override_encodings or [] + exclude_encodings = exclude_encodings or [] + self.exclude_encodings = set([x.lower() for x in exclude_encodings]) + self.chardet_encoding = None + self.is_html = is_html + self.declared_encoding = None + + # First order of business: strip a byte-order mark. + self.markup, self.sniffed_encoding = self.strip_byte_order_mark(markup) + + def _usable(self, encoding, tried): + """Should we even bother to try this encoding? + + :param encoding: Name of an encoding. + :param tried: Encodings that have already been tried. This will be modified + as a side effect. + """ + if encoding is not None: + encoding = encoding.lower() + if encoding in self.exclude_encodings: + return False + if encoding not in tried: + tried.add(encoding) + return True + return False + + @property + def encodings(self): + """Yield a number of encodings that might work for this markup. + + :yield: A sequence of strings. 
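+
+        Illustrative example, not part of the upstream docstring: with no
+        override encodings and no byte-order mark, markup such as
+        b'<?xml version="1.0" encoding="iso-8859-1"?><a/>' typically
+        yields the declared 'iso-8859-1' first, then any chardet guess,
+        then 'utf-8' and finally 'windows-1252'.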
+ """ + tried = set() + for e in self.override_encodings: + if self._usable(e, tried): + yield e + + # Did the document originally start with a byte-order mark + # that indicated its encoding? + if self._usable(self.sniffed_encoding, tried): + yield self.sniffed_encoding + + # Look within the document for an XML or HTML encoding + # declaration. + if self.declared_encoding is None: + self.declared_encoding = self.find_declared_encoding( + self.markup, self.is_html) + if self._usable(self.declared_encoding, tried): + yield self.declared_encoding + + # Use third-party character set detection to guess at the + # encoding. + if self.chardet_encoding is None: + self.chardet_encoding = chardet_dammit(self.markup) + if self._usable(self.chardet_encoding, tried): + yield self.chardet_encoding + + # As a last-ditch effort, try utf-8 and windows-1252. + for e in ('utf-8', 'windows-1252'): + if self._usable(e, tried): + yield e + + @classmethod + def strip_byte_order_mark(cls, data): + """If a byte-order mark is present, strip it and return the encoding it implies. + + :param data: Some markup. + :return: A 2-tuple (modified data, implied encoding) + """ + encoding = None + if isinstance(data, str): + # Unicode data cannot have a byte-order mark. + return data, encoding + if (len(data) >= 4) and (data[:2] == b'\xfe\xff') \ + and (data[2:4] != '\x00\x00'): + encoding = 'utf-16be' + data = data[2:] + elif (len(data) >= 4) and (data[:2] == b'\xff\xfe') \ + and (data[2:4] != '\x00\x00'): + encoding = 'utf-16le' + data = data[2:] + elif data[:3] == b'\xef\xbb\xbf': + encoding = 'utf-8' + data = data[3:] + elif data[:4] == b'\x00\x00\xfe\xff': + encoding = 'utf-32be' + data = data[4:] + elif data[:4] == b'\xff\xfe\x00\x00': + encoding = 'utf-32le' + data = data[4:] + return data, encoding + + @classmethod + def find_declared_encoding(cls, markup, is_html=False, search_entire_document=False): + """Given a document, tries to find its declared encoding. + + An XML encoding is declared at the beginning of the document. + + An HTML encoding is declared in a tag, hopefully near the + beginning of the document. + + :param markup: Some markup. + :param is_html: If True, this markup is considered to be HTML. Otherwise + it's assumed to be XML. + :param search_entire_document: Since an encoding is supposed to declared near the beginning + of the document, most of the time it's only necessary to search a few kilobytes of data. + Set this to True to force this method to search the entire document. + """ + if search_entire_document: + xml_endpos = html_endpos = len(markup) + else: + xml_endpos = 1024 + html_endpos = max(2048, int(len(markup) * 0.05)) + + if isinstance(markup, bytes): + res = encoding_res[bytes] + else: + res = encoding_res[str] + + xml_re = res['xml'] + html_re = res['html'] + declared_encoding = None + declared_encoding_match = xml_re.search(markup, endpos=xml_endpos) + if not declared_encoding_match and is_html: + declared_encoding_match = html_re.search(markup, endpos=html_endpos) + if declared_encoding_match is not None: + declared_encoding = declared_encoding_match.groups()[0] + if declared_encoding: + if isinstance(declared_encoding, bytes): + declared_encoding = declared_encoding.decode('ascii', 'replace') + return declared_encoding.lower() + return None + +class UnicodeDammit: + """A class for detecting the encoding of a *ML document and + converting it to a Unicode string. 
If the source encoding is + windows-1252, can replace MS smart quotes with their HTML or XML + equivalents.""" + + # This dictionary maps commonly seen values for "charset" in HTML + # meta tags to the corresponding Python codec names. It only covers + # values that aren't in Python's aliases and can't be determined + # by the heuristics in find_codec. + CHARSET_ALIASES = {"macintosh": "mac-roman", + "x-sjis": "shift-jis"} + + ENCODINGS_WITH_SMART_QUOTES = [ + "windows-1252", + "iso-8859-1", + "iso-8859-2", + ] + + def __init__(self, markup, override_encodings=[], + smart_quotes_to=None, is_html=False, exclude_encodings=[]): + """Constructor. + + :param markup: A bytestring representing markup in an unknown encoding. + :param override_encodings: These encodings will be tried first, + before any sniffing code is run. + + :param smart_quotes_to: By default, Microsoft smart quotes will, like all other characters, be converted + to Unicode characters. Setting this to 'ascii' will convert them to ASCII quotes instead. + Setting it to 'xml' will convert them to XML entity references, and setting it to 'html' + will convert them to HTML entity references. + :param is_html: If True, this markup is considered to be HTML. Otherwise + it's assumed to be XML. + :param exclude_encodings: These encodings will not be considered, even + if the sniffing code thinks they might make sense. + """ + self.smart_quotes_to = smart_quotes_to + self.tried_encodings = [] + self.contains_replacement_characters = False + self.is_html = is_html + self.log = logging.getLogger(__name__) + self.detector = EncodingDetector( + markup, override_encodings, is_html, exclude_encodings) + + # Short-circuit if the data is in Unicode to begin with. + if isinstance(markup, str) or markup == '': + self.markup = markup + self.unicode_markup = str(markup) + self.original_encoding = None + return + + # The encoding detector may have stripped a byte-order mark. + # Use the stripped markup from this point on. + self.markup = self.detector.markup + + u = None + for encoding in self.detector.encodings: + markup = self.detector.markup + u = self._convert_from(encoding) + if u is not None: + break + + if not u: + # None of the encodings worked. As an absolute last resort, + # try them again with character replacement. + + for encoding in self.detector.encodings: + if encoding != "ascii": + u = self._convert_from(encoding, "replace") + if u is not None: + self.log.warning( + "Some characters could not be decoded, and were " + "replaced with REPLACEMENT CHARACTER." + ) + self.contains_replacement_characters = True + break + + # If none of that worked, we could at this point force it to + # ASCII, but that would destroy so much data that I think + # giving up is better. + self.unicode_markup = u + if not u: + self.original_encoding = None + + def _sub_ms_char(self, match): + """Changes a MS smart quote character to an XML or HTML + entity, or an ASCII character.""" + orig = match.group(1) + if self.smart_quotes_to == 'ascii': + sub = self.MS_CHARS_TO_ASCII.get(orig).encode() + else: + sub = self.MS_CHARS.get(orig) + if type(sub) == tuple: + if self.smart_quotes_to == 'xml': + sub = '&#x'.encode() + sub[1].encode() + ';'.encode() + else: + sub = '&'.encode() + sub[0].encode() + ';'.encode() + else: + sub = sub.encode() + return sub + + def _convert_from(self, proposed, errors="strict"): + """Attempt to convert the markup to the proposed encoding. + + :param proposed: The name of a character encoding. 
+ """ + proposed = self.find_codec(proposed) + if not proposed or (proposed, errors) in self.tried_encodings: + return None + self.tried_encodings.append((proposed, errors)) + markup = self.markup + # Convert smart quotes to HTML if coming from an encoding + # that might have them. + if (self.smart_quotes_to is not None + and proposed in self.ENCODINGS_WITH_SMART_QUOTES): + smart_quotes_re = b"([\x80-\x9f])" + smart_quotes_compiled = re.compile(smart_quotes_re) + markup = smart_quotes_compiled.sub(self._sub_ms_char, markup) + + try: + #print("Trying to convert document to %s (errors=%s)" % ( + # proposed, errors)) + u = self._to_unicode(markup, proposed, errors) + self.markup = u + self.original_encoding = proposed + except Exception as e: + #print("That didn't work!") + #print(e) + return None + #print("Correct encoding: %s" % proposed) + return self.markup + + def _to_unicode(self, data, encoding, errors="strict"): + """Given a string and its encoding, decodes the string into Unicode. + + :param encoding: The name of an encoding. + """ + return str(data, encoding, errors) + + @property + def declared_html_encoding(self): + """If the markup is an HTML document, returns the encoding declared _within_ + the document. + """ + if not self.is_html: + return None + return self.detector.declared_encoding + + def find_codec(self, charset): + """Convert the name of a character set to a codec name. + + :param charset: The name of a character set. + :return: The name of a codec. + """ + value = (self._codec(self.CHARSET_ALIASES.get(charset, charset)) + or (charset and self._codec(charset.replace("-", ""))) + or (charset and self._codec(charset.replace("-", "_"))) + or (charset and charset.lower()) + or charset + ) + if value: + return value.lower() + return None + + def _codec(self, charset): + if not charset: + return charset + codec = None + try: + codecs.lookup(charset) + codec = charset + except (LookupError, ValueError): + pass + return codec + + + # A partial mapping of ISO-Latin-1 to HTML entities/XML numeric entities. + MS_CHARS = {b'\x80': ('euro', '20AC'), + b'\x81': ' ', + b'\x82': ('sbquo', '201A'), + b'\x83': ('fnof', '192'), + b'\x84': ('bdquo', '201E'), + b'\x85': ('hellip', '2026'), + b'\x86': ('dagger', '2020'), + b'\x87': ('Dagger', '2021'), + b'\x88': ('circ', '2C6'), + b'\x89': ('permil', '2030'), + b'\x8A': ('Scaron', '160'), + b'\x8B': ('lsaquo', '2039'), + b'\x8C': ('OElig', '152'), + b'\x8D': '?', + b'\x8E': ('#x17D', '17D'), + b'\x8F': '?', + b'\x90': '?', + b'\x91': ('lsquo', '2018'), + b'\x92': ('rsquo', '2019'), + b'\x93': ('ldquo', '201C'), + b'\x94': ('rdquo', '201D'), + b'\x95': ('bull', '2022'), + b'\x96': ('ndash', '2013'), + b'\x97': ('mdash', '2014'), + b'\x98': ('tilde', '2DC'), + b'\x99': ('trade', '2122'), + b'\x9a': ('scaron', '161'), + b'\x9b': ('rsaquo', '203A'), + b'\x9c': ('oelig', '153'), + b'\x9d': '?', + b'\x9e': ('#x17E', '17E'), + b'\x9f': ('Yuml', ''),} + + # A parochial partial mapping of ISO-Latin-1 to ASCII. Contains + # horrors like stripping diacritical marks to turn á into a, but also + # contains non-horrors like turning “ into ". 
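+    # Illustrative use, not upstream code: this table only comes into play
+    # when UnicodeDammit is constructed with smart_quotes_to='ascii', e.g.
+    #   UnicodeDammit(b'\x93hi\x94', ['windows-1252'],
+    #                 smart_quotes_to='ascii').unicode_markup  # -> '"hi"'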
+ MS_CHARS_TO_ASCII = { + b'\x80' : 'EUR', + b'\x81' : ' ', + b'\x82' : ',', + b'\x83' : 'f', + b'\x84' : ',,', + b'\x85' : '...', + b'\x86' : '+', + b'\x87' : '++', + b'\x88' : '^', + b'\x89' : '%', + b'\x8a' : 'S', + b'\x8b' : '<', + b'\x8c' : 'OE', + b'\x8d' : '?', + b'\x8e' : 'Z', + b'\x8f' : '?', + b'\x90' : '?', + b'\x91' : "'", + b'\x92' : "'", + b'\x93' : '"', + b'\x94' : '"', + b'\x95' : '*', + b'\x96' : '-', + b'\x97' : '--', + b'\x98' : '~', + b'\x99' : '(TM)', + b'\x9a' : 's', + b'\x9b' : '>', + b'\x9c' : 'oe', + b'\x9d' : '?', + b'\x9e' : 'z', + b'\x9f' : 'Y', + b'\xa0' : ' ', + b'\xa1' : '!', + b'\xa2' : 'c', + b'\xa3' : 'GBP', + b'\xa4' : '$', #This approximation is especially parochial--this is the + #generic currency symbol. + b'\xa5' : 'YEN', + b'\xa6' : '|', + b'\xa7' : 'S', + b'\xa8' : '..', + b'\xa9' : '', + b'\xaa' : '(th)', + b'\xab' : '<<', + b'\xac' : '!', + b'\xad' : ' ', + b'\xae' : '(R)', + b'\xaf' : '-', + b'\xb0' : 'o', + b'\xb1' : '+-', + b'\xb2' : '2', + b'\xb3' : '3', + b'\xb4' : ("'", 'acute'), + b'\xb5' : 'u', + b'\xb6' : 'P', + b'\xb7' : '*', + b'\xb8' : ',', + b'\xb9' : '1', + b'\xba' : '(th)', + b'\xbb' : '>>', + b'\xbc' : '1/4', + b'\xbd' : '1/2', + b'\xbe' : '3/4', + b'\xbf' : '?', + b'\xc0' : 'A', + b'\xc1' : 'A', + b'\xc2' : 'A', + b'\xc3' : 'A', + b'\xc4' : 'A', + b'\xc5' : 'A', + b'\xc6' : 'AE', + b'\xc7' : 'C', + b'\xc8' : 'E', + b'\xc9' : 'E', + b'\xca' : 'E', + b'\xcb' : 'E', + b'\xcc' : 'I', + b'\xcd' : 'I', + b'\xce' : 'I', + b'\xcf' : 'I', + b'\xd0' : 'D', + b'\xd1' : 'N', + b'\xd2' : 'O', + b'\xd3' : 'O', + b'\xd4' : 'O', + b'\xd5' : 'O', + b'\xd6' : 'O', + b'\xd7' : '*', + b'\xd8' : 'O', + b'\xd9' : 'U', + b'\xda' : 'U', + b'\xdb' : 'U', + b'\xdc' : 'U', + b'\xdd' : 'Y', + b'\xde' : 'b', + b'\xdf' : 'B', + b'\xe0' : 'a', + b'\xe1' : 'a', + b'\xe2' : 'a', + b'\xe3' : 'a', + b'\xe4' : 'a', + b'\xe5' : 'a', + b'\xe6' : 'ae', + b'\xe7' : 'c', + b'\xe8' : 'e', + b'\xe9' : 'e', + b'\xea' : 'e', + b'\xeb' : 'e', + b'\xec' : 'i', + b'\xed' : 'i', + b'\xee' : 'i', + b'\xef' : 'i', + b'\xf0' : 'o', + b'\xf1' : 'n', + b'\xf2' : 'o', + b'\xf3' : 'o', + b'\xf4' : 'o', + b'\xf5' : 'o', + b'\xf6' : 'o', + b'\xf7' : '/', + b'\xf8' : 'o', + b'\xf9' : 'u', + b'\xfa' : 'u', + b'\xfb' : 'u', + b'\xfc' : 'u', + b'\xfd' : 'y', + b'\xfe' : 'b', + b'\xff' : 'y', + } + + # A map used when removing rogue Windows-1252/ISO-8859-1 + # characters in otherwise UTF-8 documents. + # + # Note that \x81, \x8d, \x8f, \x90, and \x9d are undefined in + # Windows-1252. 
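+    # Illustrative use, not upstream code: this table backs
+    # UnicodeDammit.detwingle(), which rewrites stray Windows-1252 bytes
+    # embedded in otherwise valid UTF-8, e.g.
+    #   UnicodeDammit.detwingle('café'.encode('utf8') + b'\x93x\x94')
+    # returns UTF-8 bytes that decode cleanly to 'café “x”'.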
+ WINDOWS_1252_TO_UTF8 = { + 0x80 : b'\xe2\x82\xac', # € + 0x82 : b'\xe2\x80\x9a', # ‚ + 0x83 : b'\xc6\x92', # Æ’ + 0x84 : b'\xe2\x80\x9e', # „ + 0x85 : b'\xe2\x80\xa6', # … + 0x86 : b'\xe2\x80\xa0', # † + 0x87 : b'\xe2\x80\xa1', # ‡ + 0x88 : b'\xcb\x86', # ˆ + 0x89 : b'\xe2\x80\xb0', # ‰ + 0x8a : b'\xc5\xa0', # Å  + 0x8b : b'\xe2\x80\xb9', # ‹ + 0x8c : b'\xc5\x92', # Å’ + 0x8e : b'\xc5\xbd', # Ž + 0x91 : b'\xe2\x80\x98', # ‘ + 0x92 : b'\xe2\x80\x99', # ’ + 0x93 : b'\xe2\x80\x9c', # “ + 0x94 : b'\xe2\x80\x9d', # †+ 0x95 : b'\xe2\x80\xa2', # • + 0x96 : b'\xe2\x80\x93', # – + 0x97 : b'\xe2\x80\x94', # — + 0x98 : b'\xcb\x9c', # Ëœ + 0x99 : b'\xe2\x84\xa2', # â„¢ + 0x9a : b'\xc5\xa1', # Å¡ + 0x9b : b'\xe2\x80\xba', # › + 0x9c : b'\xc5\x93', # Å“ + 0x9e : b'\xc5\xbe', # ž + 0x9f : b'\xc5\xb8', # Ÿ + 0xa0 : b'\xc2\xa0', #   + 0xa1 : b'\xc2\xa1', # ¡ + 0xa2 : b'\xc2\xa2', # ¢ + 0xa3 : b'\xc2\xa3', # £ + 0xa4 : b'\xc2\xa4', # ¤ + 0xa5 : b'\xc2\xa5', # Â¥ + 0xa6 : b'\xc2\xa6', # ¦ + 0xa7 : b'\xc2\xa7', # § + 0xa8 : b'\xc2\xa8', # ¨ + 0xa9 : b'\xc2\xa9', # © + 0xaa : b'\xc2\xaa', # ª + 0xab : b'\xc2\xab', # « + 0xac : b'\xc2\xac', # ¬ + 0xad : b'\xc2\xad', # ­ + 0xae : b'\xc2\xae', # ® + 0xaf : b'\xc2\xaf', # ¯ + 0xb0 : b'\xc2\xb0', # ° + 0xb1 : b'\xc2\xb1', # ± + 0xb2 : b'\xc2\xb2', # ² + 0xb3 : b'\xc2\xb3', # ³ + 0xb4 : b'\xc2\xb4', # ´ + 0xb5 : b'\xc2\xb5', # µ + 0xb6 : b'\xc2\xb6', # ¶ + 0xb7 : b'\xc2\xb7', # · + 0xb8 : b'\xc2\xb8', # ¸ + 0xb9 : b'\xc2\xb9', # ¹ + 0xba : b'\xc2\xba', # º + 0xbb : b'\xc2\xbb', # » + 0xbc : b'\xc2\xbc', # ¼ + 0xbd : b'\xc2\xbd', # ½ + 0xbe : b'\xc2\xbe', # ¾ + 0xbf : b'\xc2\xbf', # ¿ + 0xc0 : b'\xc3\x80', # À + 0xc1 : b'\xc3\x81', # à + 0xc2 : b'\xc3\x82', #  + 0xc3 : b'\xc3\x83', # à + 0xc4 : b'\xc3\x84', # Ä + 0xc5 : b'\xc3\x85', # Ã… + 0xc6 : b'\xc3\x86', # Æ + 0xc7 : b'\xc3\x87', # Ç + 0xc8 : b'\xc3\x88', # È + 0xc9 : b'\xc3\x89', # É + 0xca : b'\xc3\x8a', # Ê + 0xcb : b'\xc3\x8b', # Ë + 0xcc : b'\xc3\x8c', # ÃŒ + 0xcd : b'\xc3\x8d', # à + 0xce : b'\xc3\x8e', # ÃŽ + 0xcf : b'\xc3\x8f', # à + 0xd0 : b'\xc3\x90', # à + 0xd1 : b'\xc3\x91', # Ñ + 0xd2 : b'\xc3\x92', # Ã’ + 0xd3 : b'\xc3\x93', # Ó + 0xd4 : b'\xc3\x94', # Ô + 0xd5 : b'\xc3\x95', # Õ + 0xd6 : b'\xc3\x96', # Ö + 0xd7 : b'\xc3\x97', # × + 0xd8 : b'\xc3\x98', # Ø + 0xd9 : b'\xc3\x99', # Ù + 0xda : b'\xc3\x9a', # Ú + 0xdb : b'\xc3\x9b', # Û + 0xdc : b'\xc3\x9c', # Ãœ + 0xdd : b'\xc3\x9d', # à + 0xde : b'\xc3\x9e', # Þ + 0xdf : b'\xc3\x9f', # ß + 0xe0 : b'\xc3\xa0', # à + 0xe1 : b'\xa1', # á + 0xe2 : b'\xc3\xa2', # â + 0xe3 : b'\xc3\xa3', # ã + 0xe4 : b'\xc3\xa4', # ä + 0xe5 : b'\xc3\xa5', # Ã¥ + 0xe6 : b'\xc3\xa6', # æ + 0xe7 : b'\xc3\xa7', # ç + 0xe8 : b'\xc3\xa8', # è + 0xe9 : b'\xc3\xa9', # é + 0xea : b'\xc3\xaa', # ê + 0xeb : b'\xc3\xab', # ë + 0xec : b'\xc3\xac', # ì + 0xed : b'\xc3\xad', # í + 0xee : b'\xc3\xae', # î + 0xef : b'\xc3\xaf', # ï + 0xf0 : b'\xc3\xb0', # ð + 0xf1 : b'\xc3\xb1', # ñ + 0xf2 : b'\xc3\xb2', # ò + 0xf3 : b'\xc3\xb3', # ó + 0xf4 : b'\xc3\xb4', # ô + 0xf5 : b'\xc3\xb5', # õ + 0xf6 : b'\xc3\xb6', # ö + 0xf7 : b'\xc3\xb7', # ÷ + 0xf8 : b'\xc3\xb8', # ø + 0xf9 : b'\xc3\xb9', # ù + 0xfa : b'\xc3\xba', # ú + 0xfb : b'\xc3\xbb', # û + 0xfc : b'\xc3\xbc', # ü + 0xfd : b'\xc3\xbd', # ý + 0xfe : b'\xc3\xbe', # þ + } + + MULTIBYTE_MARKERS_AND_SIZES = [ + (0xc2, 0xdf, 2), # 2-byte characters start with a byte C2-DF + (0xe0, 0xef, 3), # 3-byte characters start with E0-EF + (0xf0, 0xf4, 4), # 4-byte characters start with F0-F4 + ] + + FIRST_MULTIBYTE_MARKER = 
MULTIBYTE_MARKERS_AND_SIZES[0][0] + LAST_MULTIBYTE_MARKER = MULTIBYTE_MARKERS_AND_SIZES[-1][1] + + @classmethod + def detwingle(cls, in_bytes, main_encoding="utf8", + embedded_encoding="windows-1252"): + """Fix characters from one encoding embedded in some other encoding. + + Currently the only situation supported is Windows-1252 (or its + subset ISO-8859-1), embedded in UTF-8. + + :param in_bytes: A bytestring that you suspect contains + characters from multiple encodings. Note that this _must_ + be a bytestring. If you've already converted the document + to Unicode, you're too late. + :param main_encoding: The primary encoding of `in_bytes`. + :param embedded_encoding: The encoding that was used to embed characters + in the main document. + :return: A bytestring in which `embedded_encoding` + characters have been converted to their `main_encoding` + equivalents. + """ + if embedded_encoding.replace('_', '-').lower() not in ( + 'windows-1252', 'windows_1252'): + raise NotImplementedError( + "Windows-1252 and ISO-8859-1 are the only currently supported " + "embedded encodings.") + + if main_encoding.lower() not in ('utf8', 'utf-8'): + raise NotImplementedError( + "UTF-8 is the only currently supported main encoding.") + + byte_chunks = [] + + chunk_start = 0 + pos = 0 + while pos < len(in_bytes): + byte = in_bytes[pos] + if not isinstance(byte, int): + # Python 2.x + byte = ord(byte) + if (byte >= cls.FIRST_MULTIBYTE_MARKER + and byte <= cls.LAST_MULTIBYTE_MARKER): + # This is the start of a UTF-8 multibyte character. Skip + # to the end. + for start, end, size in cls.MULTIBYTE_MARKERS_AND_SIZES: + if byte >= start and byte <= end: + pos += size + break + elif byte >= 0x80 and byte in cls.WINDOWS_1252_TO_UTF8: + # We found a Windows-1252 character! + # Save the string up to this point as a chunk. + byte_chunks.append(in_bytes[chunk_start:pos]) + + # Now translate the Windows-1252 character into UTF-8 + # and add it as another, one-byte chunk. + byte_chunks.append(cls.WINDOWS_1252_TO_UTF8[byte]) + pos += 1 + chunk_start = pos + else: + # Go on to the next character. + pos += 1 + if chunk_start == 0: + # The string is unchanged. + return in_bytes + else: + # Store the final chunk. + byte_chunks.append(in_bytes[chunk_start:]) + return b''.join(byte_chunks) + diff --git a/venv/Lib/site-packages/bs4/diagnose.py b/venv/Lib/site-packages/bs4/diagnose.py new file mode 100644 index 0000000..1877acd --- /dev/null +++ b/venv/Lib/site-packages/bs4/diagnose.py @@ -0,0 +1,242 @@ +"""Diagnostic functions, mainly for use when doing tech support.""" + +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +import cProfile +from io import StringIO +from html.parser import HTMLParser +import bs4 +from bs4 import BeautifulSoup, __version__ +from bs4.builder import builder_registry + +import os +import pstats +import random +import tempfile +import time +import traceback +import sys +import cProfile + +def diagnose(data): + """Diagnostic suite for isolating common problems. + + :param data: A string containing markup that needs to be explained. + :return: None; diagnostics are printed to standard output. + """ + print(("Diagnostic running on Beautiful Soup %s" % __version__)) + print(("Python version %s" % sys.version)) + + basic_parsers = ["html.parser", "html5lib", "lxml"] + for name in basic_parsers: + for builder in builder_registry.builders: + if name in builder.features: + break + else: + basic_parsers.remove(name) + print(( + "I noticed that %s is not installed. 
Installing it may help." % + name)) + + if 'lxml' in basic_parsers: + basic_parsers.append("lxml-xml") + try: + from lxml import etree + print(("Found lxml version %s" % ".".join(map(str,etree.LXML_VERSION)))) + except ImportError as e: + print( + "lxml is not installed or couldn't be imported.") + + + if 'html5lib' in basic_parsers: + try: + import html5lib + print(("Found html5lib version %s" % html5lib.__version__)) + except ImportError as e: + print( + "html5lib is not installed or couldn't be imported.") + + if hasattr(data, 'read'): + data = data.read() + elif data.startswith("http:") or data.startswith("https:"): + print(('"%s" looks like a URL. Beautiful Soup is not an HTTP client.' % data)) + print("You need to use some other library to get the document behind the URL, and feed that document to Beautiful Soup.") + return + else: + try: + if os.path.exists(data): + print(('"%s" looks like a filename. Reading data from the file.' % data)) + with open(data) as fp: + data = fp.read() + except ValueError: + # This can happen on some platforms when the 'filename' is + # too long. Assume it's data and not a filename. + pass + print("") + + for parser in basic_parsers: + print(("Trying to parse your markup with %s" % parser)) + success = False + try: + soup = BeautifulSoup(data, features=parser) + success = True + except Exception as e: + print(("%s could not parse the markup." % parser)) + traceback.print_exc() + if success: + print(("Here's what %s did with the markup:" % parser)) + print((soup.prettify())) + + print(("-" * 80)) + +def lxml_trace(data, html=True, **kwargs): + """Print out the lxml events that occur during parsing. + + This lets you see how lxml parses a document when no Beautiful + Soup code is running. You can use this to determine whether + an lxml-specific problem is in Beautiful Soup's lxml tree builders + or in lxml itself. + + :param data: Some markup. + :param html: If True, markup will be parsed with lxml's HTML parser. + if False, lxml's XML parser will be used. + """ + from lxml import etree + for event, element in etree.iterparse(StringIO(data), html=html, **kwargs): + print(("%s, %4s, %s" % (event, element.tag, element.text))) + +class AnnouncingParser(HTMLParser): + """Subclass of HTMLParser that announces parse events, without doing + anything else. + + You can use this to get a picture of how html.parser sees a given + document. The easiest way to do this is to call `htmlparser_trace`. + """ + + def _p(self, s): + print(s) + + def handle_starttag(self, name, attrs): + self._p("%s START" % name) + + def handle_endtag(self, name): + self._p("%s END" % name) + + def handle_data(self, data): + self._p("%s DATA" % data) + + def handle_charref(self, name): + self._p("%s CHARREF" % name) + + def handle_entityref(self, name): + self._p("%s ENTITYREF" % name) + + def handle_comment(self, data): + self._p("%s COMMENT" % data) + + def handle_decl(self, data): + self._p("%s DECL" % data) + + def unknown_decl(self, data): + self._p("%s UNKNOWN-DECL" % data) + + def handle_pi(self, data): + self._p("%s PI" % data) + +def htmlparser_trace(data): + """Print out the HTMLParser events that occur during parsing. + + This lets you see how HTMLParser parses a document when no + Beautiful Soup code is running. + + :param data: Some markup. + """ + parser = AnnouncingParser() + parser.feed(data) + +_vowels = "aeiou" +_consonants = "bcdfghjklmnpqrstvwxyz" + +def rword(length=5): + "Generate a random word-like string." 
+ s = '' + for i in range(length): + if i % 2 == 0: + t = _consonants + else: + t = _vowels + s += random.choice(t) + return s + +def rsentence(length=4): + "Generate a random sentence-like string." + return " ".join(rword(random.randint(4,9)) for i in list(range(length))) + +def rdoc(num_elements=1000): + """Randomly generate an invalid HTML document.""" + tag_names = ['p', 'div', 'span', 'i', 'b', 'script', 'table'] + elements = [] + for i in range(num_elements): + choice = random.randint(0,3) + if choice == 0: + # New tag. + tag_name = random.choice(tag_names) + elements.append("<%s>" % tag_name) + elif choice == 1: + elements.append(rsentence(random.randint(1,4))) + elif choice == 2: + # Close a tag. + tag_name = random.choice(tag_names) + elements.append("" % tag_name) + return "" + "\n".join(elements) + "" + +def benchmark_parsers(num_elements=100000): + """Very basic head-to-head performance benchmark.""" + print(("Comparative parser benchmark on Beautiful Soup %s" % __version__)) + data = rdoc(num_elements) + print(("Generated a large invalid HTML document (%d bytes)." % len(data))) + + for parser in ["lxml", ["lxml", "html"], "html5lib", "html.parser"]: + success = False + try: + a = time.time() + soup = BeautifulSoup(data, parser) + b = time.time() + success = True + except Exception as e: + print(("%s could not parse the markup." % parser)) + traceback.print_exc() + if success: + print(("BS4+%s parsed the markup in %.2fs." % (parser, b-a))) + + from lxml import etree + a = time.time() + etree.HTML(data) + b = time.time() + print(("Raw lxml parsed the markup in %.2fs." % (b-a))) + + import html5lib + parser = html5lib.HTMLParser() + a = time.time() + parser.parse(data) + b = time.time() + print(("Raw html5lib parsed the markup in %.2fs." % (b-a))) + +def profile(num_elements=100000, parser="lxml"): + """Use Python's profiler on a randomly generated document.""" + filehandle = tempfile.NamedTemporaryFile() + filename = filehandle.name + + data = rdoc(num_elements) + vars = dict(bs4=bs4, data=data, parser=parser) + cProfile.runctx('bs4.BeautifulSoup(data, parser)' , vars, vars, filename) + + stats = pstats.Stats(filename) + # stats.strip_dirs() + stats.sort_stats("cumulative") + stats.print_stats('_html5lib|bs4', 50) + +# If this file is run as a script, standard input is diagnosed. +if __name__ == '__main__': + diagnose(sys.stdin.read()) diff --git a/venv/Lib/site-packages/bs4/element.py b/venv/Lib/site-packages/bs4/element.py new file mode 100644 index 0000000..c7dc650 --- /dev/null +++ b/venv/Lib/site-packages/bs4/element.py @@ -0,0 +1,2162 @@ +# Use of this source code is governed by the MIT license. +__license__ = "MIT" + +try: + from collections.abc import Callable # Python 3.6 +except ImportError as e: + from collections import Callable +import re +import sys +import warnings +try: + import soupsieve +except ImportError as e: + soupsieve = None + warnings.warn( + 'The soupsieve package is not installed. CSS selectors cannot be used.' + ) + +from bs4.formatter import ( + Formatter, + HTMLFormatter, + XMLFormatter, +) + +DEFAULT_OUTPUT_ENCODING = "utf-8" +PY3K = (sys.version_info[0] > 2) + +nonwhitespace_re = re.compile(r"\S+") + +# NOTE: This isn't used as of 4.7.0. I'm leaving it for a little bit on +# the off chance someone imported it for their own use. 
+whitespace_re = re.compile(r"\s+") + +def _alias(attr): + """Alias one attribute name to another for backward compatibility""" + @property + def alias(self): + return getattr(self, attr) + + @alias.setter + def alias(self): + return setattr(self, attr) + return alias + + +# These encodings are recognized by Python (so PageElement.encode +# could theoretically support them) but XML and HTML don't recognize +# them (so they should not show up in an XML or HTML document as that +# document's encoding). +# +# If an XML document is encoded in one of these encodings, no encoding +# will be mentioned in the XML declaration. If an HTML document is +# encoded in one of these encodings, and the HTML document has a +# tag that mentions an encoding, the encoding will be given as +# the empty string. +# +# Source: +# https://docs.python.org/3/library/codecs.html#python-specific-encodings +PYTHON_SPECIFIC_ENCODINGS = set([ + "idna", + "mbcs", + "oem", + "palmos", + "punycode", + "raw_unicode_escape", + "undefined", + "unicode_escape", + "raw-unicode-escape", + "unicode-escape", + "string-escape", + "string_escape", +]) + + +class NamespacedAttribute(str): + """A namespaced string (e.g. 'xml:lang') that remembers the namespace + ('xml') and the name ('lang') that were used to create it. + """ + + def __new__(cls, prefix, name=None, namespace=None): + if not name: + # This is the default namespace. Its name "has no value" + # per https://www.w3.org/TR/xml-names/#defaulting + name = None + + if name is None: + obj = str.__new__(cls, prefix) + elif prefix is None: + # Not really namespaced. + obj = str.__new__(cls, name) + else: + obj = str.__new__(cls, prefix + ":" + name) + obj.prefix = prefix + obj.name = name + obj.namespace = namespace + return obj + +class AttributeValueWithCharsetSubstitution(str): + """A stand-in object for a character encoding specified in HTML.""" + +class CharsetMetaAttributeValue(AttributeValueWithCharsetSubstitution): + """A generic stand-in for the value of a meta tag's 'charset' attribute. + + When Beautiful Soup parses the markup '', the + value of the 'charset' attribute will be one of these objects. + """ + + def __new__(cls, original_value): + obj = str.__new__(cls, original_value) + obj.original_value = original_value + return obj + + def encode(self, encoding): + """When an HTML document is being encoded to a given encoding, the + value of a meta tag's 'charset' is the name of the encoding. + """ + if encoding in PYTHON_SPECIFIC_ENCODINGS: + return '' + return encoding + + +class ContentMetaAttributeValue(AttributeValueWithCharsetSubstitution): + """A generic stand-in for the value of a meta tag's 'content' attribute. + + When Beautiful Soup parses the markup: + + + The value of the 'content' attribute will be one of these objects. + """ + + CHARSET_RE = re.compile(r"((^|;)\s*charset=)([^;]*)", re.M) + + def __new__(cls, original_value): + match = cls.CHARSET_RE.search(original_value) + if match is None: + # No substitution necessary. + return str.__new__(str, original_value) + + obj = str.__new__(cls, original_value) + obj.original_value = original_value + return obj + + def encode(self, encoding): + if encoding in PYTHON_SPECIFIC_ENCODINGS: + return '' + def rewrite(match): + return match.group(1) + encoding + return self.CHARSET_RE.sub(rewrite, self.original_value) + + +class PageElement(object): + """Contains the navigational information for some part of the page: + that is, its current location in the parse tree. + + NavigableString, Tag, etc. 
are all subclasses of PageElement. + """ + + def setup(self, parent=None, previous_element=None, next_element=None, + previous_sibling=None, next_sibling=None): + """Sets up the initial relations between this element and + other elements. + + :param parent: The parent of this element. + + :param previous_element: The element parsed immediately before + this one. + + :param next_element: The element parsed immediately before + this one. + + :param previous_sibling: The most recently encountered element + on the same level of the parse tree as this one. + + :param previous_sibling: The next element to be encountered + on the same level of the parse tree as this one. + """ + self.parent = parent + + self.previous_element = previous_element + if previous_element is not None: + self.previous_element.next_element = self + + self.next_element = next_element + if self.next_element is not None: + self.next_element.previous_element = self + + self.next_sibling = next_sibling + if self.next_sibling is not None: + self.next_sibling.previous_sibling = self + + if (previous_sibling is None + and self.parent is not None and self.parent.contents): + previous_sibling = self.parent.contents[-1] + + self.previous_sibling = previous_sibling + if previous_sibling is not None: + self.previous_sibling.next_sibling = self + + def format_string(self, s, formatter): + """Format the given string using the given formatter. + + :param s: A string. + :param formatter: A Formatter object, or a string naming one of the standard formatters. + """ + if formatter is None: + return s + if not isinstance(formatter, Formatter): + formatter = self.formatter_for_name(formatter) + output = formatter.substitute(s) + return output + + def formatter_for_name(self, formatter): + """Look up or create a Formatter for the given identifier, + if necessary. + + :param formatter: Can be a Formatter object (used as-is), a + function (used as the entity substitution hook for an + XMLFormatter or HTMLFormatter), or a string (used to look + up an XMLFormatter or HTMLFormatter in the appropriate + registry. + """ + if isinstance(formatter, Formatter): + return formatter + if self._is_xml: + c = XMLFormatter + else: + c = HTMLFormatter + if isinstance(formatter, Callable): + return c(entity_substitution=formatter) + return c.REGISTRY[formatter] + + @property + def _is_xml(self): + """Is this element part of an XML tree or an HTML tree? + + This is used in formatter_for_name, when deciding whether an + XMLFormatter or HTMLFormatter is more appropriate. It can be + inefficient, but it should be called very rarely. + """ + if self.known_xml is not None: + # Most of the time we will have determined this when the + # document is parsed. + return self.known_xml + + # Otherwise, it's likely that this element was created by + # direct invocation of the constructor from within the user's + # Python code. + if self.parent is None: + # This is the top-level object. It should have .known_xml set + # from tree creation. If not, take a guess--BS is usually + # used on HTML markup. + return getattr(self, 'is_xml', False) + return self.parent._is_xml + + nextSibling = _alias("next_sibling") # BS3 + previousSibling = _alias("previous_sibling") # BS3 + + def replace_with(self, replace_with): + """Replace this PageElement with another one, keeping the rest of the + tree the same. + + :param replace_with: A PageElement. + :return: `self`, no longer part of the tree. 
+ """ + if self.parent is None: + raise ValueError( + "Cannot replace one element with another when the " + "element to be replaced is not part of a tree.") + if replace_with is self: + return + if replace_with is self.parent: + raise ValueError("Cannot replace a Tag with its parent.") + old_parent = self.parent + my_index = self.parent.index(self) + self.extract(_self_index=my_index) + old_parent.insert(my_index, replace_with) + return self + replaceWith = replace_with # BS3 + + def unwrap(self): + """Replace this PageElement with its contents. + + :return: `self`, no longer part of the tree. + """ + my_parent = self.parent + if self.parent is None: + raise ValueError( + "Cannot replace an element with its contents when that" + "element is not part of a tree.") + my_index = self.parent.index(self) + self.extract(_self_index=my_index) + for child in reversed(self.contents[:]): + my_parent.insert(my_index, child) + return self + replace_with_children = unwrap + replaceWithChildren = unwrap # BS3 + + def wrap(self, wrap_inside): + """Wrap this PageElement inside another one. + + :param wrap_inside: A PageElement. + :return: `wrap_inside`, occupying the position in the tree that used + to be occupied by `self`, and with `self` inside it. + """ + me = self.replace_with(wrap_inside) + wrap_inside.append(me) + return wrap_inside + + def extract(self, _self_index=None): + """Destructively rips this element out of the tree. + + :param _self_index: The location of this element in its parent's + .contents, if known. Passing this in allows for a performance + optimization. + + :return: `self`, no longer part of the tree. + """ + if self.parent is not None: + if _self_index is None: + _self_index = self.parent.index(self) + del self.parent.contents[_self_index] + + #Find the two elements that would be next to each other if + #this element (and any children) hadn't been parsed. Connect + #the two. + last_child = self._last_descendant() + next_element = last_child.next_element + + if (self.previous_element is not None and + self.previous_element is not next_element): + self.previous_element.next_element = next_element + if next_element is not None and next_element is not self.previous_element: + next_element.previous_element = self.previous_element + self.previous_element = None + last_child.next_element = None + + self.parent = None + if (self.previous_sibling is not None + and self.previous_sibling is not self.next_sibling): + self.previous_sibling.next_sibling = self.next_sibling + if (self.next_sibling is not None + and self.next_sibling is not self.previous_sibling): + self.next_sibling.previous_sibling = self.previous_sibling + self.previous_sibling = self.next_sibling = None + return self + + def _last_descendant(self, is_initialized=True, accept_self=True): + """Finds the last element beneath this object to be parsed. + + :param is_initialized: Has `setup` been called on this PageElement + yet? + :param accept_self: Is `self` an acceptable answer to the question? + """ + if is_initialized and self.next_sibling is not None: + last_child = self.next_sibling.previous_element + else: + last_child = self + while isinstance(last_child, Tag) and last_child.contents: + last_child = last_child.contents[-1] + if not accept_self and last_child is self: + last_child = None + return last_child + # BS3: Not part of the API! + _lastRecursiveChild = _last_descendant + + def insert(self, position, new_child): + """Insert a new PageElement in the list of this PageElement's children. 
+ + This works the same way as `list.insert`. + + :param position: The numeric position that should be occupied + in `self.children` by the new PageElement. + :param new_child: A PageElement. + """ + if new_child is None: + raise ValueError("Cannot insert None into a tag.") + if new_child is self: + raise ValueError("Cannot insert a tag into itself.") + if (isinstance(new_child, str) + and not isinstance(new_child, NavigableString)): + new_child = NavigableString(new_child) + + from bs4 import BeautifulSoup + if isinstance(new_child, BeautifulSoup): + # We don't want to end up with a situation where one BeautifulSoup + # object contains another. Insert the children one at a time. + for subchild in list(new_child.contents): + self.insert(position, subchild) + position += 1 + return + position = min(position, len(self.contents)) + if hasattr(new_child, 'parent') and new_child.parent is not None: + # We're 'inserting' an element that's already one + # of this object's children. + if new_child.parent is self: + current_index = self.index(new_child) + if current_index < position: + # We're moving this element further down the list + # of this object's children. That means that when + # we extract this element, our target index will + # jump down one. + position -= 1 + new_child.extract() + + new_child.parent = self + previous_child = None + if position == 0: + new_child.previous_sibling = None + new_child.previous_element = self + else: + previous_child = self.contents[position - 1] + new_child.previous_sibling = previous_child + new_child.previous_sibling.next_sibling = new_child + new_child.previous_element = previous_child._last_descendant(False) + if new_child.previous_element is not None: + new_child.previous_element.next_element = new_child + + new_childs_last_element = new_child._last_descendant(False) + + if position >= len(self.contents): + new_child.next_sibling = None + + parent = self + parents_next_sibling = None + while parents_next_sibling is None and parent is not None: + parents_next_sibling = parent.next_sibling + parent = parent.parent + if parents_next_sibling is not None: + # We found the element that comes next in the document. + break + if parents_next_sibling is not None: + new_childs_last_element.next_element = parents_next_sibling + else: + # The last element of this tag is the last element in + # the document. + new_childs_last_element.next_element = None + else: + next_child = self.contents[position] + new_child.next_sibling = next_child + if new_child.next_sibling is not None: + new_child.next_sibling.previous_sibling = new_child + new_childs_last_element.next_element = next_child + + if new_childs_last_element.next_element is not None: + new_childs_last_element.next_element.previous_element = new_childs_last_element + self.contents.insert(position, new_child) + + def append(self, tag): + """Appends the given PageElement to the contents of this one. + + :param tag: A PageElement. + """ + self.insert(len(self.contents), tag) + + def extend(self, tags): + """Appends the given PageElements to this one's contents. + + :param tags: A list of PageElements. + """ + for tag in tags: + self.append(tag) + + def insert_before(self, *args): + """Makes the given element(s) the immediate predecessor of this one. + + All the elements will have the same parent, and the given elements + will be immediately before this one. + + :param args: One or more PageElements. 
+ """ + parent = self.parent + if parent is None: + raise ValueError( + "Element has no parent, so 'before' has no meaning.") + if any(x is self for x in args): + raise ValueError("Can't insert an element before itself.") + for predecessor in args: + # Extract first so that the index won't be screwed up if they + # are siblings. + if isinstance(predecessor, PageElement): + predecessor.extract() + index = parent.index(self) + parent.insert(index, predecessor) + + def insert_after(self, *args): + """Makes the given element(s) the immediate successor of this one. + + The elements will have the same parent, and the given elements + will be immediately after this one. + + :param args: One or more PageElements. + """ + # Do all error checking before modifying the tree. + parent = self.parent + if parent is None: + raise ValueError( + "Element has no parent, so 'after' has no meaning.") + if any(x is self for x in args): + raise ValueError("Can't insert an element after itself.") + + offset = 0 + for successor in args: + # Extract first so that the index won't be screwed up if they + # are siblings. + if isinstance(successor, PageElement): + successor.extract() + index = parent.index(self) + parent.insert(index+1+offset, successor) + offset += 1 + + def find_next(self, name=None, attrs={}, text=None, **kwargs): + """Find the first PageElement that matches the given criteria and + appears later in the document than this PageElement. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one(self.find_all_next, name, attrs, text, **kwargs) + findNext = find_next # BS3 + + def find_all_next(self, name=None, attrs={}, text=None, limit=None, + **kwargs): + """Find all PageElements that match the given criteria and appear + later in the document than this PageElement. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet containing PageElements. + """ + return self._find_all(name, attrs, text, limit, self.next_elements, + **kwargs) + findAllNext = find_all_next # BS3 + + def find_next_sibling(self, name=None, attrs={}, text=None, **kwargs): + """Find the closest sibling to this PageElement that matches the + given criteria and appears later in the document. + + All find_* methods take a common set of arguments. See the + online documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. 
+ :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one(self.find_next_siblings, name, attrs, text, + **kwargs) + findNextSibling = find_next_sibling # BS3 + + def find_next_siblings(self, name=None, attrs={}, text=None, limit=None, + **kwargs): + """Find all siblings of this PageElement that match the given criteria + and appear later in the document. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet of PageElements. + :rtype: bs4.element.ResultSet + """ + return self._find_all(name, attrs, text, limit, + self.next_siblings, **kwargs) + findNextSiblings = find_next_siblings # BS3 + fetchNextSiblings = find_next_siblings # BS2 + + def find_previous(self, name=None, attrs={}, text=None, **kwargs): + """Look backwards in the document from this PageElement and find the + first PageElement that matches the given criteria. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one( + self.find_all_previous, name, attrs, text, **kwargs) + findPrevious = find_previous # BS3 + + def find_all_previous(self, name=None, attrs={}, text=None, limit=None, + **kwargs): + """Look backwards in the document from this PageElement and find all + PageElements that match the given criteria. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet of PageElements. + :rtype: bs4.element.ResultSet + """ + return self._find_all(name, attrs, text, limit, self.previous_elements, + **kwargs) + findAllPrevious = find_all_previous # BS3 + fetchPrevious = find_all_previous # BS2 + + def find_previous_sibling(self, name=None, attrs={}, text=None, **kwargs): + """Returns the closest sibling to this PageElement that matches the + given criteria and appears earlier in the document. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :kwargs: A dictionary of filters on attribute values. + :return: A PageElement. 
+ :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_one(self.find_previous_siblings, name, attrs, text, + **kwargs) + findPreviousSibling = find_previous_sibling # BS3 + + def find_previous_siblings(self, name=None, attrs={}, text=None, + limit=None, **kwargs): + """Returns all siblings to this PageElement that match the + given criteria and appear earlier in the document. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param text: A filter for a NavigableString with specific text. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + :return: A ResultSet of PageElements. + :rtype: bs4.element.ResultSet + """ + return self._find_all(name, attrs, text, limit, + self.previous_siblings, **kwargs) + findPreviousSiblings = find_previous_siblings # BS3 + fetchPreviousSiblings = find_previous_siblings # BS2 + + def find_parent(self, name=None, attrs={}, **kwargs): + """Find the closest parent of this PageElement that matches the given + criteria. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :kwargs: A dictionary of filters on attribute values. + + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + # NOTE: We can't use _find_one because findParents takes a different + # set of arguments. + r = None + l = self.find_parents(name, attrs, 1, **kwargs) + if l: + r = l[0] + return r + findParent = find_parent # BS3 + + def find_parents(self, name=None, attrs={}, limit=None, **kwargs): + """Find all parents of this PageElement that match the given criteria. + + All find_* methods take a common set of arguments. See the online + documentation for detailed explanations. + + :param name: A filter on tag name. + :param attrs: A dictionary of filters on attribute values. + :param limit: Stop looking after finding this many results. + :kwargs: A dictionary of filters on attribute values. + + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self._find_all(name, attrs, None, limit, self.parents, + **kwargs) + findParents = find_parents # BS3 + fetchParents = find_parents # BS2 + + @property + def next(self): + """The PageElement, if any, that was parsed just after this one. + + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self.next_element + + @property + def previous(self): + """The PageElement, if any, that was parsed just before this one. + + :return: A PageElement. + :rtype: bs4.element.Tag | bs4.element.NavigableString + """ + return self.previous_element + + #These methods do the real heavy lifting. + + def _find_one(self, method, name, attrs, text, **kwargs): + r = None + l = method(name, attrs, text, 1, **kwargs) + if l: + r = l[0] + return r + + def _find_all(self, name, attrs, text, limit, generator, **kwargs): + "Iterates over a generator looking for things that match." 
+ + if text is None and 'string' in kwargs: + text = kwargs['string'] + del kwargs['string'] + + if isinstance(name, SoupStrainer): + strainer = name + else: + strainer = SoupStrainer(name, attrs, text, **kwargs) + + if text is None and not limit and not attrs and not kwargs: + if name is True or name is None: + # Optimization to find all tags. + result = (element for element in generator + if isinstance(element, Tag)) + return ResultSet(strainer, result) + elif isinstance(name, str): + # Optimization to find all tags with a given name. + if name.count(':') == 1: + # This is a name with a prefix. If this is a namespace-aware document, + # we need to match the local name against tag.name. If not, + # we need to match the fully-qualified name against tag.name. + prefix, local_name = name.split(':', 1) + else: + prefix = None + local_name = name + result = (element for element in generator + if isinstance(element, Tag) + and ( + element.name == name + ) or ( + element.name == local_name + and (prefix is None or element.prefix == prefix) + ) + ) + return ResultSet(strainer, result) + results = ResultSet(strainer) + while True: + try: + i = next(generator) + except StopIteration: + break + if i: + found = strainer.search(i) + if found: + results.append(found) + if limit and len(results) >= limit: + break + return results + + #These generators can be used to navigate starting from both + #NavigableStrings and Tags. + @property + def next_elements(self): + """All PageElements that were parsed after this one. + + :yield: A sequence of PageElements. + """ + i = self.next_element + while i is not None: + yield i + i = i.next_element + + @property + def next_siblings(self): + """All PageElements that are siblings of this one but were parsed + later. + + :yield: A sequence of PageElements. + """ + i = self.next_sibling + while i is not None: + yield i + i = i.next_sibling + + @property + def previous_elements(self): + """All PageElements that were parsed before this one. + + :yield: A sequence of PageElements. + """ + i = self.previous_element + while i is not None: + yield i + i = i.previous_element + + @property + def previous_siblings(self): + """All PageElements that are siblings of this one but were parsed + earlier. + + :yield: A sequence of PageElements. + """ + i = self.previous_sibling + while i is not None: + yield i + i = i.previous_sibling + + @property + def parents(self): + """All PageElements that are parents of this PageElement. + + :yield: A sequence of PageElements. + """ + i = self.parent + while i is not None: + yield i + i = i.parent + + @property + def decomposed(self): + """Check whether a PageElement has been decomposed. + + :rtype: bool + """ + return getattr(self, '_decomposed', False) or False + + # Old non-property versions of the generators, for backwards + # compatibility with BS3. + def nextGenerator(self): + return self.next_elements + + def nextSiblingGenerator(self): + return self.next_siblings + + def previousGenerator(self): + return self.previous_elements + + def previousSiblingGenerator(self): + return self.previous_siblings + + def parentGenerator(self): + return self.parents + + +class NavigableString(str, PageElement): + """A Python Unicode string that is part of a parse tree. + + When Beautiful Soup parses the markup penguin, it will + create a NavigableString for the string "penguin". + """ + + PREFIX = '' + SUFFIX = '' + + # We can't tell just by looking at a string whether it's contained + # in an XML document or an HTML document. 
+ + known_xml = None + + def __new__(cls, value): + """Create a new NavigableString. + + When unpickling a NavigableString, this method is called with + the string in DEFAULT_OUTPUT_ENCODING. That encoding needs to be + passed in to the superclass's __new__ or the superclass won't know + how to handle non-ASCII characters. + """ + if isinstance(value, str): + u = str.__new__(cls, value) + else: + u = str.__new__(cls, value, DEFAULT_OUTPUT_ENCODING) + u.setup() + return u + + def __copy__(self): + """A copy of a NavigableString has the same contents and class + as the original, but it is not connected to the parse tree. + """ + return type(self)(self) + + def __getnewargs__(self): + return (str(self),) + + def __getattr__(self, attr): + """text.string gives you text. This is for backwards + compatibility for Navigable*String, but for CData* it lets you + get the string without the CData wrapper.""" + if attr == 'string': + return self + else: + raise AttributeError( + "'%s' object has no attribute '%s'" % ( + self.__class__.__name__, attr)) + + def output_ready(self, formatter="minimal"): + """Run the string through the provided formatter. + + :param formatter: A Formatter object, or a string naming one of the standard formatters. + """ + output = self.format_string(self, formatter) + return self.PREFIX + output + self.SUFFIX + + @property + def name(self): + """Since a NavigableString is not a Tag, it has no .name. + + This property is implemented so that code like this doesn't crash + when run on a mixture of Tag and NavigableString objects: + [x.name for x in tag.children] + """ + return None + + @name.setter + def name(self, name): + """Prevent NavigableString.name from ever being set.""" + raise AttributeError("A NavigableString cannot be given a name.") + + +class PreformattedString(NavigableString): + """A NavigableString not subject to the normal formatting rules. + + This is an abstract class used for special kinds of strings such + as comments (the Comment class) and CDATA blocks (the CData + class). + """ + + PREFIX = '' + SUFFIX = '' + + def output_ready(self, formatter=None): + """Make this string ready for output by adding any subclass-specific + prefix or suffix. + + :param formatter: A Formatter object, or a string naming one + of the standard formatters. The string will be passed into the + Formatter, but only to trigger any side effects: the return + value is ignored. + + :return: The string, with any subclass-specific prefix and + suffix added on. + """ + if formatter is not None: + ignore = self.format_string(self, formatter) + return self.PREFIX + self + self.SUFFIX + +class CData(PreformattedString): + """A CDATA block.""" + PREFIX = '' + +class ProcessingInstruction(PreformattedString): + """A SGML processing instruction.""" + + PREFIX = '' + +class XMLProcessingInstruction(ProcessingInstruction): + """An XML processing instruction.""" + PREFIX = '' + +class Comment(PreformattedString): + """An HTML or XML comment.""" + PREFIX = '' + + +class Declaration(PreformattedString): + """An XML declaration.""" + PREFIX = '' + + +class Doctype(PreformattedString): + """A document type declaration.""" + @classmethod + def for_name_and_ids(cls, name, pub_id, system_id): + """Generate an appropriate document type declaration for a given + public ID and system ID. + + :param name: The name of the document's root element, e.g. 'html'. + :param pub_id: The Formal Public Identifier for this document type, + e.g. 
'-//W3C//DTD XHTML 1.1//EN' + :param system_id: The system identifier for this document type, + e.g. 'http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd' + + :return: A Doctype. + """ + value = name or '' + if pub_id is not None: + value += ' PUBLIC "%s"' % pub_id + if system_id is not None: + value += ' "%s"' % system_id + elif system_id is not None: + value += ' SYSTEM "%s"' % system_id + + return Doctype(value) + + PREFIX = '\n' + + +class Stylesheet(NavigableString): + """A NavigableString representing an stylesheet (probably + CSS). + + Used to distinguish embedded stylesheets from textual content. + """ + pass + + +class Script(NavigableString): + """A NavigableString representing an executable script (probably + Javascript). + + Used to distinguish executable code from textual content. + """ + pass + + +class TemplateString(NavigableString): + """A NavigableString representing a string found inside an HTML + template embedded in a larger document. + + Used to distinguish such strings from the main body of the document. + """ + pass + + +class Tag(PageElement): + """Represents an HTML or XML tag that is part of a parse tree, along + with its attributes and contents. + + When Beautiful Soup parses the markup penguin, it will + create a Tag object representing the tag. + """ + + def __init__(self, parser=None, builder=None, name=None, namespace=None, + prefix=None, attrs=None, parent=None, previous=None, + is_xml=None, sourceline=None, sourcepos=None, + can_be_empty_element=None, cdata_list_attributes=None, + preserve_whitespace_tags=None + ): + """Basic constructor. + + :param parser: A BeautifulSoup object. + :param builder: A TreeBuilder. + :param name: The name of the tag. + :param namespace: The URI of this Tag's XML namespace, if any. + :param prefix: The prefix for this Tag's XML namespace, if any. + :param attrs: A dictionary of this Tag's attribute values. + :param parent: The PageElement to use as this Tag's parent. + :param previous: The PageElement that was parsed immediately before + this tag. + :param is_xml: If True, this is an XML tag. Otherwise, this is an + HTML tag. + :param sourceline: The line number where this tag was found in its + source document. + :param sourcepos: The character position within `sourceline` where this + tag was found. + :param can_be_empty_element: If True, this tag should be + represented as . If False, this tag should be represented + as . + :param cdata_list_attributes: A list of attributes whose values should + be treated as CDATA if they ever show up on this tag. + :param preserve_whitespace_tags: A list of tag names whose contents + should have their whitespace preserved. + """ + if parser is None: + self.parser_class = None + else: + # We don't actually store the parser object: that lets extracted + # chunks be garbage-collected. + self.parser_class = parser.__class__ + if name is None: + raise ValueError("No value provided for new tag's name.") + self.name = name + self.namespace = namespace + self.prefix = prefix + if ((not builder or builder.store_line_numbers) + and (sourceline is not None or sourcepos is not None)): + self.sourceline = sourceline + self.sourcepos = sourcepos + if attrs is None: + attrs = {} + elif attrs: + if builder is not None and builder.cdata_list_attributes: + attrs = builder._replace_cdata_list_attribute_values( + self.name, attrs) + else: + attrs = dict(attrs) + else: + attrs = dict(attrs) + + # If possible, determine ahead of time whether this tag is an + # XML tag. 
+ if builder: + self.known_xml = builder.is_xml + else: + self.known_xml = is_xml + self.attrs = attrs + self.contents = [] + self.setup(parent, previous) + self.hidden = False + + if builder is None: + # In the absence of a TreeBuilder, use whatever values were + # passed in here. They're probably None, unless this is a copy of some + # other tag. + self.can_be_empty_element = can_be_empty_element + self.cdata_list_attributes = cdata_list_attributes + self.preserve_whitespace_tags = preserve_whitespace_tags + else: + # Set up any substitutions for this tag, such as the charset in a META tag. + builder.set_up_substitutions(self) + + # Ask the TreeBuilder whether this tag might be an empty-element tag. + self.can_be_empty_element = builder.can_be_empty_element(name) + + # Keep track of the list of attributes of this tag that + # might need to be treated as a list. + # + # For performance reasons, we store the whole data structure + # rather than asking the question of every tag. Asking would + # require building a new data structure every time, and + # (unlike can_be_empty_element), we almost never need + # to check this. + self.cdata_list_attributes = builder.cdata_list_attributes + + # Keep track of the names that might cause this tag to be treated as a + # whitespace-preserved tag. + self.preserve_whitespace_tags = builder.preserve_whitespace_tags + + parserClass = _alias("parser_class") # BS3 + + def __copy__(self): + """A copy of a Tag is a new Tag, unconnected to the parse tree. + Its contents are a copy of the old Tag's contents. + """ + clone = type(self)( + None, self.builder, self.name, self.namespace, + self.prefix, self.attrs, is_xml=self._is_xml, + sourceline=self.sourceline, sourcepos=self.sourcepos, + can_be_empty_element=self.can_be_empty_element, + cdata_list_attributes=self.cdata_list_attributes, + preserve_whitespace_tags=self.preserve_whitespace_tags + ) + for attr in ('can_be_empty_element', 'hidden'): + setattr(clone, attr, getattr(self, attr)) + for child in self.contents: + clone.append(child.__copy__()) + return clone + + @property + def is_empty_element(self): + """Is this tag an empty-element tag? (aka a self-closing tag) + + A tag that has contents is never an empty-element tag. + + A tag that has no contents may or may not be an empty-element + tag. It depends on the builder used to create the tag. If the + builder has a designated list of empty-element tags, then only + a tag whose name shows up in that list is considered an + empty-element tag. + + If the builder has no designated list of empty-element tags, + then any tag with no contents is an empty-element tag. + """ + return len(self.contents) == 0 and self.can_be_empty_element + isSelfClosing = is_empty_element # BS3 + + @property + def string(self): + """Convenience property to get the single string within this + PageElement. + + TODO It might make sense to have NavigableString.string return + itself. + + :return: If this element has a single string child, return + value is that string. If this element has one child tag, + return value is the 'string' attribute of the child tag, + recursively. If this element is itself a string, has no + children, or has more than one child, return value is None. 
+ """ + if len(self.contents) != 1: + return None + child = self.contents[0] + if isinstance(child, NavigableString): + return child + return child.string + + @string.setter + def string(self, string): + """Replace this PageElement's contents with `string`.""" + self.clear() + self.append(string.__class__(string)) + + def _all_strings(self, strip=False, types=(NavigableString, CData)): + """Yield all strings of certain classes, possibly stripping them. + + :param strip: If True, all strings will be stripped before being + yielded. + + :types: A tuple of NavigableString subclasses. Any strings of + a subclass not found in this list will be ignored. By + default, this means only NavigableString and CData objects + will be considered. So no comments, processing instructions, + etc. + + :yield: A sequence of strings. + """ + for descendant in self.descendants: + if ( + (types is None and not isinstance(descendant, NavigableString)) + or + (types is not None and type(descendant) not in types)): + continue + if strip: + descendant = descendant.strip() + if len(descendant) == 0: + continue + yield descendant + + strings = property(_all_strings) + + @property + def stripped_strings(self): + """Yield all strings in the document, stripping them first. + + :yield: A sequence of stripped strings. + """ + for string in self._all_strings(True): + yield string + + def get_text(self, separator="", strip=False, + types=(NavigableString, CData)): + """Get all child strings, concatenated using the given separator. + + :param separator: Strings will be concatenated using this separator. + + :param strip: If True, strings will be stripped before being + concatenated. + + :types: A tuple of NavigableString subclasses. Any strings of + a subclass not found in this list will be ignored. By + default, this means only NavigableString and CData objects + will be considered. So no comments, processing instructions, + stylesheets, etc. + + :return: A string. + """ + return separator.join([s for s in self._all_strings( + strip, types=types)]) + getText = get_text + text = property(get_text) + + def decompose(self): + """Recursively destroys this PageElement and its children. + + This element will be removed from the tree and wiped out; so + will everything beneath it. + + The behavior of a decomposed PageElement is undefined and you + should never use one for anything, but if you need to _check_ + whether an element has been decomposed, you can use the + `decomposed` property. + """ + self.extract() + i = self + while i is not None: + n = i.next_element + i.__dict__.clear() + i.contents = [] + i._decomposed = True + i = n + + def clear(self, decompose=False): + """Wipe out all children of this PageElement by calling extract() + on them. + + :param decompose: If this is True, decompose() (a more + destructive method) will be called instead of extract(). + """ + if decompose: + for element in self.contents[:]: + if isinstance(element, Tag): + element.decompose() + else: + element.extract() + else: + for element in self.contents[:]: + element.extract() + + def smooth(self): + """Smooth out this element's children by consolidating consecutive + strings. + + This makes pretty-printed output look more natural following a + lot of operations that modified the tree. + """ + # Mark the first position of every pair of children that need + # to be consolidated. Do this rather than making a copy of + # self.contents, since in most cases very few strings will be + # affected. 
+ marked = [] + for i, a in enumerate(self.contents): + if isinstance(a, Tag): + # Recursively smooth children. + a.smooth() + if i == len(self.contents)-1: + # This is the last item in .contents, and it's not a + # tag. There's no chance it needs any work. + continue + b = self.contents[i+1] + if (isinstance(a, NavigableString) + and isinstance(b, NavigableString) + and not isinstance(a, PreformattedString) + and not isinstance(b, PreformattedString) + ): + marked.append(i) + + # Go over the marked positions in reverse order, so that + # removing items from .contents won't affect the remaining + # positions. + for i in reversed(marked): + a = self.contents[i] + b = self.contents[i+1] + b.extract() + n = NavigableString(a+b) + a.replace_with(n) + + def index(self, element): + """Find the index of a child by identity, not value. + + Avoids issues with tag.contents.index(element) getting the + index of equal elements. + + :param element: Look for this PageElement in `self.contents`. + """ + for i, child in enumerate(self.contents): + if child is element: + return i + raise ValueError("Tag.index: element not in tag") + + def get(self, key, default=None): + """Returns the value of the 'key' attribute for the tag, or + the value given for 'default' if it doesn't have that + attribute.""" + return self.attrs.get(key, default) + + def get_attribute_list(self, key, default=None): + """The same as get(), but always returns a list. + + :param key: The attribute to look for. + :param default: Use this value if the attribute is not present + on this PageElement. + :return: A list of values, probably containing only a single + value. + """ + value = self.get(key, default) + if not isinstance(value, list): + value = [value] + return value + + def has_attr(self, key): + """Does this PageElement have an attribute with the given name?""" + return key in self.attrs + + def __hash__(self): + return str(self).__hash__() + + def __getitem__(self, key): + """tag[key] returns the value of the 'key' attribute for the Tag, + and throws an exception if it's not there.""" + return self.attrs[key] + + def __iter__(self): + "Iterating over a Tag iterates over its contents." + return iter(self.contents) + + def __len__(self): + "The length of a Tag is the length of its list of contents." + return len(self.contents) + + def __contains__(self, x): + return x in self.contents + + def __bool__(self): + "A tag is non-None even if it has no contents." + return True + + def __setitem__(self, key, value): + """Setting tag[key] sets the value of the 'key' attribute for the + tag.""" + self.attrs[key] = value + + def __delitem__(self, key): + "Deleting tag[key] deletes all 'key' attributes for the tag." + self.attrs.pop(key, None) + + def __call__(self, *args, **kwargs): + """Calling a Tag like a function is the same as calling its + find_all() method. Eg. tag('a') returns a list of all the A tags + found within this tag.""" + return self.find_all(*args, **kwargs) + + def __getattr__(self, tag): + """Calling tag.subtag is the same as calling tag.find(name="subtag")""" + #print("Getattr %s.%s" % (self.__class__, tag)) + if len(tag) > 3 and tag.endswith('Tag'): + # BS3: soup.aTag -> "soup.find("a") + tag_name = tag[:-3] + warnings.warn( + '.%(name)sTag is deprecated, use .find("%(name)s") instead. If you really were looking for a tag called %(name)sTag, use .find("%(name)sTag")' % dict( + name=tag_name + ) + ) + return self.find(tag_name) + # We special case contents to avoid recursion. 
+ elif not tag.startswith("__") and not tag == "contents": + return self.find(tag) + raise AttributeError( + "'%s' object has no attribute '%s'" % (self.__class__, tag)) + + def __eq__(self, other): + """Returns true iff this Tag has the same name, the same attributes, + and the same contents (recursively) as `other`.""" + if self is other: + return True + if (not hasattr(other, 'name') or + not hasattr(other, 'attrs') or + not hasattr(other, 'contents') or + self.name != other.name or + self.attrs != other.attrs or + len(self) != len(other)): + return False + for i, my_child in enumerate(self.contents): + if my_child != other.contents[i]: + return False + return True + + def __ne__(self, other): + """Returns true iff this Tag is not identical to `other`, + as defined in __eq__.""" + return not self == other + + def __repr__(self, encoding="unicode-escape"): + """Renders this PageElement as a string. + + :param encoding: The encoding to use (Python 2 only). + :return: Under Python 2, a bytestring; under Python 3, + a Unicode string. + """ + if PY3K: + # "The return value must be a string object", i.e. Unicode + return self.decode() + else: + # "The return value must be a string object", i.e. a bytestring. + # By convention, the return value of __repr__ should also be + # an ASCII string. + return self.encode(encoding) + + def __unicode__(self): + """Renders this PageElement as a Unicode string.""" + return self.decode() + + def __str__(self): + """Renders this PageElement as a generic string. + + :return: Under Python 2, a UTF-8 bytestring; under Python 3, + a Unicode string. + """ + if PY3K: + return self.decode() + else: + return self.encode() + + if PY3K: + __str__ = __repr__ = __unicode__ + + def encode(self, encoding=DEFAULT_OUTPUT_ENCODING, + indent_level=None, formatter="minimal", + errors="xmlcharrefreplace"): + """Render a bytestring representation of this PageElement and its + contents. + + :param encoding: The destination encoding. + :param indent_level: Each line of the rendering will be + indented this many spaces. Used internally in + recursive calls while pretty-printing. + :param formatter: A Formatter object, or a string naming one of + the standard formatters. + :param errors: An error handling strategy such as + 'xmlcharrefreplace'. This value is passed along into + encode() and its value should be one of the constants + defined by Python. + :return: A bytestring. + + """ + # Turn the data structure into Unicode, then encode the + # Unicode. + u = self.decode(indent_level, encoding, formatter) + return u.encode(encoding, errors) + + def decode(self, indent_level=None, + eventual_encoding=DEFAULT_OUTPUT_ENCODING, + formatter="minimal"): + """Render a Unicode representation of this PageElement and its + contents. + + :param indent_level: Each line of the rendering will be + indented this many spaces. Used internally in + recursive calls while pretty-printing. + :param eventual_encoding: The tag is destined to be + encoded into this encoding. This method is _not_ + responsible for performing that encoding. This information + is passed in so that it can be substituted in if the + document contains a tag that mentions the document's + encoding. + :param formatter: A Formatter object, or a string naming one of + the standard formatters. + """ + + # First off, turn a non-Formatter `formatter` into a Formatter + # object. This will stop the lookup from happening over and + # over again. 
+ if not isinstance(formatter, Formatter): + formatter = self.formatter_for_name(formatter) + attributes = formatter.attributes(self) + attrs = [] + for key, val in attributes: + if val is None: + decoded = key + else: + if isinstance(val, list) or isinstance(val, tuple): + val = ' '.join(val) + elif not isinstance(val, str): + val = str(val) + elif ( + isinstance(val, AttributeValueWithCharsetSubstitution) + and eventual_encoding is not None + ): + val = val.encode(eventual_encoding) + + text = formatter.attribute_value(val) + decoded = ( + str(key) + '=' + + formatter.quoted_attribute_value(text)) + attrs.append(decoded) + close = '' + closeTag = '' + + prefix = '' + if self.prefix: + prefix = self.prefix + ":" + + if self.is_empty_element: + close = formatter.void_element_close_prefix or '' + else: + closeTag = '' % (prefix, self.name) + + pretty_print = self._should_pretty_print(indent_level) + space = '' + indent_space = '' + if indent_level is not None: + indent_space = (' ' * (indent_level - 1)) + if pretty_print: + space = indent_space + indent_contents = indent_level + 1 + else: + indent_contents = None + contents = self.decode_contents( + indent_contents, eventual_encoding, formatter + ) + + if self.hidden: + # This is the 'document root' object. + s = contents + else: + s = [] + attribute_string = '' + if attrs: + attribute_string = ' ' + ' '.join(attrs) + if indent_level is not None: + # Even if this particular tag is not pretty-printed, + # we should indent up to the start of the tag. + s.append(indent_space) + s.append('<%s%s%s%s>' % ( + prefix, self.name, attribute_string, close)) + if pretty_print: + s.append("\n") + s.append(contents) + if pretty_print and contents and contents[-1] != "\n": + s.append("\n") + if pretty_print and closeTag: + s.append(space) + s.append(closeTag) + if indent_level is not None and closeTag and self.next_sibling: + # Even if this particular tag is not pretty-printed, + # we're now done with the tag, and we should add a + # newline if appropriate. + s.append("\n") + s = ''.join(s) + return s + + def _should_pretty_print(self, indent_level): + """Should this tag be pretty-printed? + + Most of them should, but some (such as
<pre> in HTML
+        documents) should not.
+        """
+        return (
+            indent_level is not None
+            and (
+                not self.preserve_whitespace_tags
+                or self.name not in self.preserve_whitespace_tags
+            )
+        )
+
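A minimal usage sketch of the pretty-printing rule described above, assuming Beautiful Soup 4 is importable and using Python's bundled html.parser (the sample markup is illustrative only):

    from bs4 import BeautifulSoup

    soup = BeautifulSoup(
        "<div><pre>  keep\n  this  </pre><p>reflow me</p></div>", "html.parser")

    # <p> is re-indented by prettify(), while the contents of <pre> (a
    # whitespace-preserved tag for the HTML builders) are emitted verbatim.
    print(soup.prettify())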
+    def prettify(self, encoding=None, formatter="minimal"):
+        """Pretty-print this PageElement as a string.
+
+        :param encoding: The eventual encoding of the string. If this is None,
+            a Unicode string will be returned.
+        :param formatter: A Formatter object, or a string naming one of
+            the standard formatters.
+        :return: A Unicode string (if encoding==None) or a bytestring 
+            (otherwise).
+        """
+        if encoding is None:
+            return self.decode(True, formatter=formatter)
+        else:
+            return self.encode(encoding, True, formatter=formatter)
+
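A short sketch of the return-type behaviour documented in prettify() above (same assumptions: bs4 installed, html.parser, toy markup):

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<p>hello</p>", "html.parser")

    pretty_text = soup.prettify()          # str, because encoding is None
    pretty_bytes = soup.prettify("utf-8")  # bytes in the requested encoding
    print(type(pretty_text), type(pretty_bytes))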
+    def decode_contents(self, indent_level=None,
+                       eventual_encoding=DEFAULT_OUTPUT_ENCODING,
+                       formatter="minimal"):
+        """Renders the contents of this tag as a Unicode string.
+
+        :param indent_level: Each line of the rendering will be
+           indented this many spaces. Used internally in
+           recursive calls while pretty-printing.
+
+        :param eventual_encoding: The tag is destined to be
+           encoded into this encoding. decode_contents() is _not_
+           responsible for performing that encoding. This information
+           is passed in so that it can be substituted in if the
+           document contains a <META> tag that mentions the document's
+           encoding.
+
+        :param formatter: A Formatter object, or a string naming one of
+            the standard Formatters.
+        """
+        # First off, turn a string formatter into a Formatter object. This
+        # will stop the lookup from happening over and over again.
+        if not isinstance(formatter, Formatter):
+            formatter = self.formatter_for_name(formatter)
+
+        pretty_print = (indent_level is not None)
+        s = []
+        for c in self:
+            text = None
+            if isinstance(c, NavigableString):
+                text = c.output_ready(formatter)
+            elif isinstance(c, Tag):
+                s.append(c.decode(indent_level, eventual_encoding,
+                                  formatter))
+            preserve_whitespace = (
+                self.preserve_whitespace_tags and self.name in self.preserve_whitespace_tags
+            )
+            if text and indent_level and not preserve_whitespace:
+                text = text.strip()
+            if text:
+                if pretty_print and not preserve_whitespace:
+                    s.append(" " * (indent_level - 1))
+                s.append(text)
+                if pretty_print and not preserve_whitespace:
+                    s.append("\n")
+        return ''.join(s)
+
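As a usage sketch (bs4 and html.parser assumed, markup illustrative), decode() renders the tag itself while decode_contents() renders only what is inside it:

    from bs4 import BeautifulSoup

    div = BeautifulSoup("<div><b>inner</b> text</div>", "html.parser").div

    print(div.decode())           # "<div><b>inner</b> text</div>"
    print(div.decode_contents())  # "<b>inner</b> text"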
+    def encode_contents(
+        self, indent_level=None, encoding=DEFAULT_OUTPUT_ENCODING,
+        formatter="minimal"):
+        """Renders the contents of this PageElement as a bytestring.
+
+        :param indent_level: Each line of the rendering will be
+           indented this many spaces. Used internally in
+           recursive calls while pretty-printing.
+
+        :param encoding: The bytestring will be in this encoding.
+
+        :param formatter: A Formatter object, or a string naming one of
+            the standard Formatters.
+
+        :return: A bytestring.
+        """
+        contents = self.decode_contents(indent_level, encoding, formatter)
+        return contents.encode(encoding)
+
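A small sketch of the contents-only rendering, under the same assumptions (bs4 with the stdlib html.parser; the sample markup is invented):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<div><b>bold</b> text</div>", "html.parser")

# Only the children are rendered; the outer <div> tag itself is omitted.
print(soup.div.decode_contents())   # <b>bold</b> text
print(soup.div.encode_contents())   # b'<b>bold</b> text'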
+    # Old method for BS3 compatibility
+    def renderContents(self, encoding=DEFAULT_OUTPUT_ENCODING,
+                       prettyPrint=False, indentLevel=0):
+        """Deprecated method for BS3 compatibility."""
+        if not prettyPrint:
+            indentLevel = None
+        return self.encode_contents(
+            indent_level=indentLevel, encoding=encoding)
+
+    #Soup methods
+
+    def find(self, name=None, attrs={}, recursive=True, text=None,
+             **kwargs):
+        """Look in the children of this PageElement and find the first
+        PageElement that matches the given criteria.
+
+        All find_* methods take a common set of arguments. See the online
+        documentation for detailed explanations.
+
+        :param name: A filter on tag name.
+        :param attrs: A dictionary of filters on attribute values.
+        :param recursive: If this is True, find() will perform a
+            recursive search of this PageElement's children. Otherwise,
+            only the direct children will be considered.
+        :param limit: Stop looking after finding this many results.
+        :kwargs: A dictionary of filters on attribute values.
+        :return: A PageElement.
+        :rtype: bs4.element.Tag | bs4.element.NavigableString
+        """
+        r = None
+        l = self.find_all(name, attrs, recursive, text, 1, **kwargs)
+        if l:
+            r = l[0]
+        return r
+    findChild = find #BS2
+
+    def find_all(self, name=None, attrs={}, recursive=True, text=None,
+                 limit=None, **kwargs):
+        """Look in the children of this PageElement and find all
+        PageElements that match the given criteria.
+
+        All find_* methods take a common set of arguments. See the online
+        documentation for detailed explanations.
+
+        :param name: A filter on tag name.
+        :param attrs: A dictionary of filters on attribute values.
+        :param recursive: If this is True, find_all() will perform a
+            recursive search of this PageElement's children. Otherwise,
+            only the direct children will be considered.
+        :param limit: Stop looking after finding this many results.
+        :kwargs: A dictionary of filters on attribute values.
+        :return: A ResultSet of PageElements.
+        :rtype: bs4.element.ResultSet
+        """
+        generator = self.descendants
+        if not recursive:
+            generator = self.children
+        return self._find_all(name, attrs, text, limit, generator, **kwargs)
+    findAll = find_all       # BS3
+    findChildren = find_all  # BS2
+
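A hedged sketch of the find()/find_all() arguments documented above; the sample markup and variable names are invented for illustration:

from bs4 import BeautifulSoup

html = '<div id="menu"><a class="item" href="/a">A</a><a class="item" href="/b">B</a></div>'
soup = BeautifulSoup(html, "html.parser")

first_link = soup.find("a")                        # first match, or None
items = soup.find_all("a", class_="item")          # ResultSet of all matches
direct = soup.div.find_all("a", recursive=False)   # direct children only
limited = soup.find_all("a", limit=1)              # stop after one result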
+    #Generator methods
+    @property
+    def children(self):
+        """Iterate over all direct children of this PageElement.
+
+        :yield: A sequence of PageElements.
+        """
+        # return iter() to make the purpose of the method clear
+        return iter(self.contents)  # XXX This seems to be untested.
+
+    @property
+    def descendants(self):
+        """Iterate over all children of this PageElement in a
+        breadth-first sequence.
+
+        :yield: A sequence of PageElements.
+        """
+        if not len(self.contents):
+            return
+        stopNode = self._last_descendant().next_element
+        current = self.contents[0]
+        while current is not stopNode:
+            yield current
+            current = current.next_element
+
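For illustration, the difference between the two generators, assuming the same bs4 + html.parser setup and invented markup:

from bs4 import BeautifulSoup

soup = BeautifulSoup("<ul><li>one</li><li>two <b>!</b></li></ul>", "html.parser")

# .children yields only the direct children of <ul>: the two <li> tags.
print([child.name for child in soup.ul.children])

# .descendants also yields everything nested below them, including the
# NavigableStrings, by following next_element links.
print([str(node) for node in soup.ul.descendants])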
+    # CSS selector code
+    def select_one(self, selector, namespaces=None, **kwargs):
+        """Perform a CSS selection operation on the current element.
+
+        :param selector: A CSS selector.
+
+        :param namespaces: A dictionary mapping namespace prefixes
+           used in the CSS selector to namespace URIs. By default,
+           Beautiful Soup will use the prefixes it encountered while
+           parsing the document.
+
+        :param kwargs: Keyword arguments to be passed into SoupSieve's 
+           soupsieve.select() method.
+
+        :return: A Tag.
+        :rtype: bs4.element.Tag
+        """
+        value = self.select(selector, namespaces, 1, **kwargs)
+        if value:
+            return value[0]
+        return None
+
+    def select(self, selector, namespaces=None, limit=None, **kwargs):
+        """Perform a CSS selection operation on the current element.
+
+        This uses the SoupSieve library.
+
+        :param selector: A string containing a CSS selector.
+
+        :param namespaces: A dictionary mapping namespace prefixes
+           used in the CSS selector to namespace URIs. By default,
+           Beautiful Soup will use the prefixes it encountered while
+           parsing the document.
+
+        :param limit: After finding this number of results, stop looking.
+
+        :param kwargs: Keyword arguments to be passed into SoupSieve's 
+           soupsieve.select() method.
+
+        :return: A ResultSet of Tags.
+        :rtype: bs4.element.ResultSet
+        """
+        if namespaces is None:
+            namespaces = self._namespaces
+        
+        if limit is None:
+            limit = 0
+        if soupsieve is None:
+            raise NotImplementedError(
+                "Cannot execute CSS selectors because the soupsieve package is not installed."
+            )
+            
+        results = soupsieve.select(selector, self, namespaces, limit, **kwargs)
+
+        # We do this because it's more consistent and because
+        # ResultSet.__getattr__ has a helpful error message.
+        return ResultSet(None, results)
+
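A sketch of CSS selection as wired up above; it only works when the soupsieve package is importable, and the selector and markup here are invented examples:

from bs4 import BeautifulSoup

html = '<div class="menu"><a href="/a">A</a><a href="/b" id="b">B</a></div>'
soup = BeautifulSoup(html, "html.parser")

links = soup.select("div.menu > a")   # ResultSet of both <a> tags
b_tag = soup.select_one("a#b")        # first match only, or None
one = soup.select("a", limit=1)       # stop after one result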
+    # Old names for backwards compatibility
+    def childGenerator(self):
+        """Deprecated generator."""
+        return self.children
+
+    def recursiveChildGenerator(self):
+        """Deprecated generator."""
+        return self.descendants
+
+    def has_key(self, key):
+        """Deprecated method. This was kind of misleading because has_key()
+        (attributes) was different from __in__ (contents).
+
+        has_key() is gone in Python 3, anyway.
+        """
+        warnings.warn('has_key is deprecated. Use has_attr("%s") instead.' % (
+                key))
+        return self.has_attr(key)
+
+# Next, a couple classes to represent queries and their results.
+class SoupStrainer(object):
+    """Encapsulates a number of ways of matching a markup element (tag or
+    string).
+
+    This is primarily used to underpin the find_* methods, but you can
+    create one yourself and pass it in as `parse_only` to the
+    `BeautifulSoup` constructor, to parse a subset of a large
+    document.
+    """
+
+    def __init__(self, name=None, attrs={}, text=None, **kwargs):
+        """Constructor.
+
+        The SoupStrainer constructor takes the same arguments passed
+        into the find_* methods. See the online documentation for
+        detailed explanations.
+
+        :param name: A filter on tag name.
+        :param attrs: A dictionary of filters on attribute values.
+        :param text: A filter for a NavigableString with specific text.
+        :kwargs: A dictionary of filters on attribute values.
+        """        
+        self.name = self._normalize_search_value(name)
+        if not isinstance(attrs, dict):
+            # Treat a non-dict value for attrs as a search for the 'class'
+            # attribute.
+            kwargs['class'] = attrs
+            attrs = None
+
+        if 'class_' in kwargs:
+            # Treat class_="foo" as a search for the 'class'
+            # attribute, overriding any non-dict value for attrs.
+            kwargs['class'] = kwargs['class_']
+            del kwargs['class_']
+
+        if kwargs:
+            if attrs:
+                attrs = attrs.copy()
+                attrs.update(kwargs)
+            else:
+                attrs = kwargs
+        normalized_attrs = {}
+        for key, value in list(attrs.items()):
+            normalized_attrs[key] = self._normalize_search_value(value)
+
+        self.attrs = normalized_attrs
+        self.text = self._normalize_search_value(text)
+
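A minimal sketch of the parse_only use case mentioned in the class docstring; the markup is illustrative, and note that the stdlib html.parser builder honours a SoupStrainer while html5lib ignores it:

from bs4 import BeautifulSoup, SoupStrainer

html = '<p>intro</p><a href="/x">X</a><a>no href</a>'

# Parse only <a> tags that actually have an href attribute.
only_links = SoupStrainer("a", href=True)
soup = BeautifulSoup(html, "html.parser", parse_only=only_links)
print(soup.decode())   # only the first <a> tag survives parsing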
+    def _normalize_search_value(self, value):
+        # Leave it alone if it's a Unicode string, a callable, a
+        # regular expression, a boolean, or None.
+        if (isinstance(value, str) or isinstance(value, Callable) or hasattr(value, 'match')
+            or isinstance(value, bool) or value is None):
+            return value
+
+        # If it's a bytestring, convert it to Unicode, treating it as UTF-8.
+        if isinstance(value, bytes):
+            return value.decode("utf8")
+
+        # If it's listlike, convert it into a list of strings.
+        if hasattr(value, '__iter__'):
+            new_value = []
+            for v in value:
+                if (hasattr(v, '__iter__') and not isinstance(v, bytes)
+                    and not isinstance(v, str)):
+                    # This is almost certainly the user's mistake. In the
+                    # interests of avoiding infinite loops, we'll let
+                    # it through as-is rather than doing a recursive call.
+                    new_value.append(v)
+                else:
+                    new_value.append(self._normalize_search_value(v))
+            return new_value
+
+        # Otherwise, convert it into a Unicode string.
+        # The unicode(str()) thing is so this will do the same thing on Python 2
+        # and Python 3.
+        return str(str(value))
+
+    def __str__(self):
+        """A human-readable representation of this SoupStrainer."""
+        if self.text:
+            return self.text
+        else:
+            return "%s|%s" % (self.name, self.attrs)
+
+    def search_tag(self, markup_name=None, markup_attrs={}):
+        """Check whether a Tag with the given name and attributes would
+        match this SoupStrainer.
+
+        Used prospectively to decide whether to even bother creating a Tag
+        object.
+
+        :param markup_name: A tag name as found in some markup.
+        :param markup_attrs: A dictionary of attributes as found in some markup.
+
+        :return: True if the prospective tag would match this SoupStrainer;
+            False otherwise.
+        """
+        found = None
+        markup = None
+        if isinstance(markup_name, Tag):
+            markup = markup_name
+            markup_attrs = markup
+        call_function_with_tag_data = (
+            isinstance(self.name, Callable)
+            and not isinstance(markup_name, Tag))
+
+        if ((not self.name)
+            or call_function_with_tag_data
+            or (markup and self._matches(markup, self.name))
+            or (not markup and self._matches(markup_name, self.name))):
+            if call_function_with_tag_data:
+                match = self.name(markup_name, markup_attrs)
+            else:
+                match = True
+                markup_attr_map = None
+                for attr, match_against in list(self.attrs.items()):
+                    if not markup_attr_map:
+                        if hasattr(markup_attrs, 'get'):
+                            markup_attr_map = markup_attrs
+                        else:
+                            markup_attr_map = {}
+                            for k, v in markup_attrs:
+                                markup_attr_map[k] = v
+                    attr_value = markup_attr_map.get(attr)
+                    if not self._matches(attr_value, match_against):
+                        match = False
+                        break
+            if match:
+                if markup:
+                    found = markup
+                else:
+                    found = markup_name
+        if found and self.text and not self._matches(found.string, self.text):
+            found = None
+        return found
+
+    # For BS3 compatibility.
+    searchTag = search_tag
+
+    def search(self, markup):
+        """Find all items in `markup` that match this SoupStrainer.
+
+        Used by the core _find_all() method, which is ultimately
+        called by all find_* methods.
+
+        :param markup: A PageElement or a list of them.
+        """
+        # print('looking for %s in %s' % (self, markup))
+        found = None
+        # If given a list of items, scan it for a text element that
+        # matches.
+        if hasattr(markup, '__iter__') and not isinstance(markup, (Tag, str)):
+            for element in markup:
+                if isinstance(element, NavigableString) \
+                       and self.search(element):
+                    found = element
+                    break
+        # If it's a Tag, make sure its name or attributes match.
+        # Don't bother with Tags if we're searching for text.
+        elif isinstance(markup, Tag):
+            if not self.text or self.name or self.attrs:
+                found = self.search_tag(markup)
+        # If it's text, make sure the text matches.
+        elif isinstance(markup, NavigableString) or \
+                 isinstance(markup, str):
+            if not self.name and not self.attrs and self._matches(markup, self.text):
+                found = markup
+        else:
+            raise Exception(
+                "I don't know how to match against a %s" % markup.__class__)
+        return found
+
+    def _matches(self, markup, match_against, already_tried=None):
+        # print(u"Matching %s against %s" % (markup, match_against))
+        result = False
+        if isinstance(markup, list) or isinstance(markup, tuple):
+            # This should only happen when searching a multi-valued attribute
+            # like 'class'.
+            for item in markup:
+                if self._matches(item, match_against):
+                    return True
+            # We didn't match any particular value of the multivalue
+            # attribute, but maybe we match the attribute value when
+            # considered as a string.
+            if self._matches(' '.join(markup), match_against):
+                return True
+            return False
+        
+        if match_against is True:
+            # True matches any non-None value.
+            return markup is not None
+
+        if isinstance(match_against, Callable):
+            return match_against(markup)
+
+        # Custom callables take the tag as an argument, but all
+        # other ways of matching match the tag name as a string.
+        original_markup = markup
+        if isinstance(markup, Tag):
+            markup = markup.name
+
+        # Ensure that `markup` is either a Unicode string, or None.
+        markup = self._normalize_search_value(markup)
+
+        if markup is None:
+            # None matches None, False, an empty string, an empty list, and so on.
+            return not match_against
+
+        if (hasattr(match_against, '__iter__')
+            and not isinstance(match_against, str)):
+            # We're asked to match against an iterable of items.
+            # The markup must be match at least one item in the
+            # iterable. We'll try each one in turn.
+            #
+            # To avoid infinite recursion we need to keep track of
+            # items we've already seen.
+            if not already_tried:
+                already_tried = set()
+            for item in match_against:
+                if item.__hash__:
+                    key = item
+                else:
+                    key = id(item)
+                if key in already_tried:
+                    continue
+                else:
+                    already_tried.add(key)
+                    if self._matches(original_markup, item, already_tried):
+                        return True
+            else:
+                return False
+        
+        # Beyond this point we might need to run the test twice: once against
+        # the tag's name and once against its prefixed name.
+        match = False
+        
+        if not match and isinstance(match_against, str):
+            # Exact string match
+            match = markup == match_against
+
+        if not match and hasattr(match_against, 'search'):
+            # Regexp match
+            return match_against.search(markup)
+
+        if (not match
+            and isinstance(original_markup, Tag)
+            and original_markup.prefix):
+            # Try the whole thing again with the prefixed tag name.
+            return self._matches(
+                original_markup.prefix + ':' + original_markup.name, match_against
+            )
+
+        return match
+
+
+class ResultSet(list):
+    """A ResultSet is just a list that keeps track of the SoupStrainer
+    that created it."""
+    def __init__(self, source, result=()):
+        """Constructor.
+
+        :param source: A SoupStrainer.
+        :param result: A list of PageElements.
+        """
+        super(ResultSet, self).__init__(result)
+        self.source = source
+
+    def __getattr__(self, key):
+        """Raise a helpful exception to explain a common code fix."""
+        raise AttributeError(
+            "ResultSet object has no attribute '%s'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()?" % key
+        )
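The error message above refers to a common mix-up; a short illustration under the same assumed setup (bs4 with html.parser, invented markup):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>one</p><p>two</p>", "html.parser")

paragraphs = soup.find_all("p")     # a ResultSet, i.e. a list of Tags
print(paragraphs[0].get_text())     # index in first, then use Tag methods

# paragraphs.get_text() would raise the AttributeError defined above,
# because the ResultSet itself is a list, not a single element.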
diff --git a/venv/Lib/site-packages/bs4/formatter.py b/venv/Lib/site-packages/bs4/formatter.py
new file mode 100644
index 0000000..2cbab4c
--- /dev/null
+++ b/venv/Lib/site-packages/bs4/formatter.py
@@ -0,0 +1,152 @@
+from bs4.dammit import EntitySubstitution
+
+class Formatter(EntitySubstitution):
+    """Describes a strategy to use when outputting a parse tree to a string.
+
+    Some parts of this strategy come from the distinction between
+    HTML4, HTML5, and XML. Others are configurable by the user.
+
+    Formatters are passed in as the `formatter` argument to methods
+    like `PageElement.encode`. Most people won't need to think about
+    formatters, and most people who need to think about them can pass
+    in one of these predefined strings as `formatter` rather than
+    making a new Formatter object:
+
+    For HTML documents:
+     * 'html' - HTML entity substitution for generic HTML documents. (default)
+     * 'html5' - HTML entity substitution for HTML5 documents.
+     * 'minimal' - Only make the substitutions necessary to guarantee
+                   valid HTML.
+     * None - Do not perform any substitution. This will be faster
+              but may result in invalid markup.
+
+    For XML documents:
+     * 'html' - Entity substitution for XHTML documents.
+     * 'minimal' - Only make the substitutions necessary to guarantee
+                   valid XML. (default)
+     * None - Do not perform any substitution. This will be faster
+              but may result in invalid markup.
+    """
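As an illustration of passing these predefined names through the output methods (assuming bs4 with the stdlib html.parser; the sample markup is invented):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>Coffee &amp; cream</p>", "html.parser")

# The parsed tree stores a literal "&"; the formatter decides how it is
# re-escaped on the way back out.
print(soup.p.decode(formatter="minimal"))   # <p>Coffee &amp; cream</p>
print(soup.p.decode(formatter=None))        # <p>Coffee & cream</p>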
+    # Registries of XML and HTML formatters.
+    XML_FORMATTERS = {}
+    HTML_FORMATTERS = {}
+
+    HTML = 'html'
+    XML = 'xml'
+
+    HTML_DEFAULTS = dict(
+        cdata_containing_tags=set(["script", "style"]),
+    )
+
+    def _default(self, language, value, kwarg):
+        if value is not None:
+            return value
+        if language == self.XML:
+            return set()
+        return self.HTML_DEFAULTS[kwarg]
+
+    def __init__(
+            self, language=None, entity_substitution=None,
+            void_element_close_prefix='/', cdata_containing_tags=None,
+    ):
+        """Constructor.
+
+        :param language: This should be Formatter.XML if you are formatting
+           XML markup and Formatter.HTML if you are formatting HTML markup.
+
+        :param entity_substitution: A function to call to replace special
+           characters with XML/HTML entities. For examples, see 
+           bs4.dammit.EntitySubstitution.substitute_html and substitute_xml.
+        :param void_element_close_prefix: By default, void elements
+           are represented as <br/> (XML rules) rather than <br>
+           (HTML rules). To get <br>, pass in the empty string.
+        :param cdata_containing_tags: The list of tags that are defined
+           as containing CDATA in this dialect. For example, in HTML,
+           <script> and <style> tags are defined as containing CDATA,
+           and their contents should not be formatted.
+
This numeric entity is missing the final semicolon:
+ +
a
+
This document contains (do you see it?)
+
This document ends with That attribute value was bogus
+The doctype is invalid because it contains extra whitespace +
That boolean attribute had no value
+
Here's a nonexistent entity: &#foo; (do you see it?)
+
This document ends before the entity finishes: > +

Paragraphs shouldn't contain block display elements, but this one does:

you see?

+Multiple values for the same attribute. +
Here's a table
+
+
This tag contains nothing but whitespace:
+

This p tag is cut off by

the end of the blockquote tag
+
Here's a nested table:
foo
This table contains bare markup
+ +
This document contains a surprise doctype
+ +
Tag name contains Unicode characters
+ + +""" + + +class SoupTest(unittest.TestCase): + + @property + def default_builder(self): + return default_builder + + def soup(self, markup, **kwargs): + """Build a Beautiful Soup object from markup.""" + builder = kwargs.pop('builder', self.default_builder) + return BeautifulSoup(markup, builder=builder, **kwargs) + + def document_for(self, markup, **kwargs): + """Turn an HTML fragment into a document. + + The details depend on the builder. + """ + return self.default_builder(**kwargs).test_fragment_to_document(markup) + + def assertSoupEquals(self, to_parse, compare_parsed_to=None): + builder = self.default_builder + obj = BeautifulSoup(to_parse, builder=builder) + if compare_parsed_to is None: + compare_parsed_to = to_parse + + self.assertEqual(obj.decode(), self.document_for(compare_parsed_to)) + + def assertConnectedness(self, element): + """Ensure that next_element and previous_element are properly + set for all descendants of the given element. + """ + earlier = None + for e in element.descendants: + if earlier: + self.assertEqual(e, earlier.next_element) + self.assertEqual(earlier, e.previous_element) + earlier = e + + def linkage_validator(self, el, _recursive_call=False): + """Ensure proper linkage throughout the document.""" + descendant = None + # Document element should have no previous element or previous sibling. + # It also shouldn't have a next sibling. + if el.parent is None: + assert el.previous_element is None,\ + "Bad previous_element\nNODE: {}\nPREV: {}\nEXPECTED: {}".format( + el, el.previous_element, None + ) + assert el.previous_sibling is None,\ + "Bad previous_sibling\nNODE: {}\nPREV: {}\nEXPECTED: {}".format( + el, el.previous_sibling, None + ) + assert el.next_sibling is None,\ + "Bad next_sibling\nNODE: {}\nNEXT: {}\nEXPECTED: {}".format( + el, el.next_sibling, None + ) + + idx = 0 + child = None + last_child = None + last_idx = len(el.contents) - 1 + for child in el.contents: + descendant = None + + # Parent should link next element to their first child + # That child should have no previous sibling + if idx == 0: + if el.parent is not None: + assert el.next_element is child,\ + "Bad next_element\nNODE: {}\nNEXT: {}\nEXPECTED: {}".format( + el, el.next_element, child + ) + assert child.previous_element is el,\ + "Bad previous_element\nNODE: {}\nPREV: {}\nEXPECTED: {}".format( + child, child.previous_element, el + ) + assert child.previous_sibling is None,\ + "Bad previous_sibling\nNODE: {}\nPREV {}\nEXPECTED: {}".format( + child, child.previous_sibling, None + ) + + # If not the first child, previous index should link as sibling to this index + # Previous element should match the last index or the last bubbled up descendant + else: + assert child.previous_sibling is el.contents[idx - 1],\ + "Bad previous_sibling\nNODE: {}\nPREV {}\nEXPECTED {}".format( + child, child.previous_sibling, el.contents[idx - 1] + ) + assert el.contents[idx - 1].next_sibling is child,\ + "Bad next_sibling\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + el.contents[idx - 1], el.contents[idx - 1].next_sibling, child + ) + + if last_child is not None: + assert child.previous_element is last_child,\ + "Bad previous_element\nNODE: {}\nPREV {}\nEXPECTED {}\nCONTENTS {}".format( + child, child.previous_element, last_child, child.parent.contents + ) + assert last_child.next_element is child,\ + "Bad next_element\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + last_child, last_child.next_element, child + ) + + if isinstance(child, Tag) and child.contents: + descendant = 
self.linkage_validator(child, True) + # A bubbled up descendant should have no next siblings + assert descendant.next_sibling is None,\ + "Bad next_sibling\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + descendant, descendant.next_sibling, None + ) + + # Mark last child as either the bubbled up descendant or the current child + if descendant is not None: + last_child = descendant + else: + last_child = child + + # If last child, there are non next siblings + if idx == last_idx: + assert child.next_sibling is None,\ + "Bad next_sibling\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + child, child.next_sibling, None + ) + idx += 1 + + child = descendant if descendant is not None else child + if child is None: + child = el + + if not _recursive_call and child is not None: + target = el + while True: + if target is None: + assert child.next_element is None, \ + "Bad next_element\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + child, child.next_element, None + ) + break + elif target.next_sibling is not None: + assert child.next_element is target.next_sibling, \ + "Bad next_element\nNODE: {}\nNEXT {}\nEXPECTED {}".format( + child, child.next_element, target.next_sibling + ) + break + target = target.parent + + # We are done, so nothing to return + return None + else: + # Return the child to the recursive caller + return child + + +class HTMLTreeBuilderSmokeTest(object): + + """A basic test of a treebuilder's competence. + + Any HTML treebuilder, present or future, should be able to pass + these tests. With invalid markup, there's room for interpretation, + and different parsers can handle it differently. But with the + markup in these tests, there's not much room for interpretation. + """ + + def test_empty_element_tags(self): + """Verify that all HTML4 and HTML5 empty element (aka void element) tags + are handled correctly. + """ + for name in [ + 'area', 'base', 'br', 'col', 'embed', 'hr', 'img', 'input', 'keygen', 'link', 'menuitem', 'meta', 'param', 'source', 'track', 'wbr', + 'spacer', 'frame' + ]: + soup = self.soup("") + new_tag = soup.new_tag(name) + self.assertEqual(True, new_tag.is_empty_element) + + def test_special_string_containers(self): + soup = self.soup( + "" + ) + assert isinstance(soup.style.string, Stylesheet) + assert isinstance(soup.script.string, Script) + + soup = self.soup( + "" + ) + assert isinstance(soup.style.string, Stylesheet) + # The contents of the style tag resemble an HTML comment, but + # it's not treated as a comment. + self.assertEqual("", soup.style.string) + assert isinstance(soup.style.string, Stylesheet) + + def test_pickle_and_unpickle_identity(self): + # Pickling a tree, then unpickling it, yields a tree identical + # to the original. + tree = self.soup("foo") + dumped = pickle.dumps(tree, 2) + loaded = pickle.loads(dumped) + self.assertEqual(loaded.__class__, BeautifulSoup) + self.assertEqual(loaded.decode(), tree.decode()) + + def assertDoctypeHandled(self, doctype_fragment): + """Assert that a given doctype string is handled correctly.""" + doctype_str, soup = self._document_with_doctype(doctype_fragment) + + # Make sure a Doctype object was created. + doctype = soup.contents[0] + self.assertEqual(doctype.__class__, Doctype) + self.assertEqual(doctype, doctype_fragment) + self.assertEqual( + soup.encode("utf8")[:len(doctype_str)], + doctype_str + ) + + # Make sure that the doctype was correctly associated with the + # parse tree and that the rest of the document parsed. 
+ self.assertEqual(soup.p.contents[0], 'foo') + + def _document_with_doctype(self, doctype_fragment, doctype_string="DOCTYPE"): + """Generate and parse a document with the given doctype.""" + doctype = '' % (doctype_string, doctype_fragment) + markup = doctype + '\n

foo

' + soup = self.soup(markup) + return doctype.encode("utf8"), soup + + def test_normal_doctypes(self): + """Make sure normal, everyday HTML doctypes are handled correctly.""" + self.assertDoctypeHandled("html") + self.assertDoctypeHandled( + 'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"') + + def test_empty_doctype(self): + soup = self.soup("") + doctype = soup.contents[0] + self.assertEqual("", doctype.strip()) + + def test_mixed_case_doctype(self): + # A lowercase or mixed-case doctype becomes a Doctype. + for doctype_fragment in ("doctype", "DocType"): + doctype_str, soup = self._document_with_doctype( + "html", doctype_fragment + ) + + # Make sure a Doctype object was created and that the DOCTYPE + # is uppercase. + doctype = soup.contents[0] + self.assertEqual(doctype.__class__, Doctype) + self.assertEqual(doctype, "html") + self.assertEqual( + soup.encode("utf8")[:len(doctype_str)], + b"" + ) + + # Make sure that the doctype was correctly associated with the + # parse tree and that the rest of the document parsed. + self.assertEqual(soup.p.contents[0], 'foo') + + def test_public_doctype_with_url(self): + doctype = 'html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"' + self.assertDoctypeHandled(doctype) + + def test_system_doctype(self): + self.assertDoctypeHandled('foo SYSTEM "http://www.example.com/"') + + def test_namespaced_system_doctype(self): + # We can handle a namespaced doctype with a system ID. + self.assertDoctypeHandled('xsl:stylesheet SYSTEM "htmlent.dtd"') + + def test_namespaced_public_doctype(self): + # Test a namespaced doctype with a public id. + self.assertDoctypeHandled('xsl:stylesheet PUBLIC "htmlent.dtd"') + + def test_real_xhtml_document(self): + """A real XHTML document should come out more or less the same as it went in.""" + markup = b""" + + +Hello. +Goodbye. +""" + soup = self.soup(markup) + self.assertEqual( + soup.encode("utf-8").replace(b"\n", b""), + markup.replace(b"\n", b"")) + + def test_namespaced_html(self): + """When a namespaced XML document is parsed as HTML it should + be treated as HTML with weird tag names. + """ + markup = b"""content""" + soup = self.soup(markup) + self.assertEqual(2, len(soup.find_all("ns1:foo"))) + + def test_processing_instruction(self): + # We test both Unicode and bytestring to verify that + # process_markup correctly sets processing_instruction_class + # even when the markup is already Unicode and there is no + # need to process anything. + markup = """""" + soup = self.soup(markup) + self.assertEqual(markup, soup.decode()) + + markup = b"""""" + soup = self.soup(markup) + self.assertEqual(markup, soup.encode("utf8")) + + def test_deepcopy(self): + """Make sure you can copy the tree builder. + + This is important because the builder is part of a + BeautifulSoup object, and we want to be able to copy that. + """ + copy.deepcopy(self.default_builder) + + def test_p_tag_is_never_empty_element(self): + """A

tag is never designated as an empty-element tag. + + Even if the markup shows it as an empty-element tag, it + shouldn't be presented that way. + """ + soup = self.soup("

") + self.assertFalse(soup.p.is_empty_element) + self.assertEqual(str(soup.p), "

") + + def test_unclosed_tags_get_closed(self): + """A tag that's not closed by the end of the document should be closed. + + This applies to all tags except empty-element tags. + """ + self.assertSoupEquals("

", "

") + self.assertSoupEquals("", "") + + self.assertSoupEquals("
", "
") + + def test_br_is_always_empty_element_tag(self): + """A
tag is designated as an empty-element tag. + + Some parsers treat

as one
tag, some parsers as + two tags, but it should always be an empty-element tag. + """ + soup = self.soup("

") + self.assertTrue(soup.br.is_empty_element) + self.assertEqual(str(soup.br), "
") + + def test_nested_formatting_elements(self): + self.assertSoupEquals("") + + def test_double_head(self): + html = ''' + + +Ordinary HEAD element test + + + +Hello, world! + + +''' + soup = self.soup(html) + self.assertEqual("text/javascript", soup.find('script')['type']) + + def test_comment(self): + # Comments are represented as Comment objects. + markup = "

foobaz

" + self.assertSoupEquals(markup) + + soup = self.soup(markup) + comment = soup.find(text="foobar") + self.assertEqual(comment.__class__, Comment) + + # The comment is properly integrated into the tree. + foo = soup.find(text="foo") + self.assertEqual(comment, foo.next_element) + baz = soup.find(text="baz") + self.assertEqual(comment, baz.previous_element) + + def test_preserved_whitespace_in_pre_and_textarea(self): + """Whitespace must be preserved in
 and "
+        self.assertSoupEquals(pre_markup)
+        self.assertSoupEquals(textarea_markup)
+
+        soup = self.soup(pre_markup)
+        self.assertEqual(soup.pre.prettify(), pre_markup)
+
+        soup = self.soup(textarea_markup)
+        self.assertEqual(soup.textarea.prettify(), textarea_markup)
+
+        soup = self.soup("")
+        self.assertEqual(soup.textarea.prettify(), "")
+
+    def test_nested_inline_elements(self):
+        """Inline elements can be nested indefinitely."""
+        b_tag = "Inside a B tag"
+        self.assertSoupEquals(b_tag)
+
+        nested_b_tag = "

A nested tag

" + self.assertSoupEquals(nested_b_tag) + + double_nested_b_tag = "

A doubly nested tag

" + self.assertSoupEquals(nested_b_tag) + + def test_nested_block_level_elements(self): + """Block elements can be nested.""" + soup = self.soup('

Foo

') + blockquote = soup.blockquote + self.assertEqual(blockquote.p.b.string, 'Foo') + self.assertEqual(blockquote.b.string, 'Foo') + + def test_correctly_nested_tables(self): + """One table can go inside another one.""" + markup = ('' + '' + "') + + self.assertSoupEquals( + markup, + '
Here's another table:" + '' + '' + '
foo
Here\'s another table:' + '
foo
' + '
') + + self.assertSoupEquals( + "" + "" + "
Foo
Bar
Baz
") + + def test_multivalued_attribute_with_whitespace(self): + # Whitespace separating the values of a multi-valued attribute + # should be ignored. + + markup = '
' + soup = self.soup(markup) + self.assertEqual(['foo', 'bar'], soup.div['class']) + + # If you search by the literal name of the class it's like the whitespace + # wasn't there. + self.assertEqual(soup.div, soup.find('div', class_="foo bar")) + + def test_deeply_nested_multivalued_attribute(self): + # html5lib can set the attributes of the same tag many times + # as it rearranges the tree. This has caused problems with + # multivalued attributes. + markup = '
' + soup = self.soup(markup) + self.assertEqual(["css"], soup.div.div['class']) + + def test_multivalued_attribute_on_html(self): + # html5lib uses a different API to set the attributes ot the + # tag. This has caused problems with multivalued + # attributes. + markup = '' + soup = self.soup(markup) + self.assertEqual(["a", "b"], soup.html['class']) + + def test_angle_brackets_in_attribute_values_are_escaped(self): + self.assertSoupEquals('', '') + + def test_strings_resembling_character_entity_references(self): + # "&T" and "&p" look like incomplete character entities, but they are + # not. + self.assertSoupEquals( + "

• AT&T is in the s&p 500

", + "

\u2022 AT&T is in the s&p 500

" + ) + + def test_apos_entity(self): + self.assertSoupEquals( + "

Bob's Bar

", + "

Bob's Bar

", + ) + + def test_entities_in_foreign_document_encoding(self): + # “ and ” are invalid numeric entities referencing + # Windows-1252 characters. - references a character common + # to Windows-1252 and Unicode, and ☃ references a + # character only found in Unicode. + # + # All of these entities should be converted to Unicode + # characters. + markup = "

“Hello” -☃

" + soup = self.soup(markup) + self.assertEqual("“Hello†-☃", soup.p.string) + + def test_entities_in_attributes_converted_to_unicode(self): + expect = '

' + self.assertSoupEquals('

', expect) + self.assertSoupEquals('

', expect) + self.assertSoupEquals('

', expect) + self.assertSoupEquals('

', expect) + + def test_entities_in_text_converted_to_unicode(self): + expect = '

pi\N{LATIN SMALL LETTER N WITH TILDE}ata

' + self.assertSoupEquals("

piñata

", expect) + self.assertSoupEquals("

piñata

", expect) + self.assertSoupEquals("

piñata

", expect) + self.assertSoupEquals("

piñata

", expect) + + def test_quot_entity_converted_to_quotation_mark(self): + self.assertSoupEquals("

I said "good day!"

", + '

I said "good day!"

') + + def test_out_of_range_entity(self): + expect = "\N{REPLACEMENT CHARACTER}" + self.assertSoupEquals("�", expect) + self.assertSoupEquals("�", expect) + self.assertSoupEquals("�", expect) + + def test_multipart_strings(self): + "Mostly to prevent a recurrence of a bug in the html5lib treebuilder." + soup = self.soup("

\nfoo

") + self.assertEqual("p", soup.h2.string.next_element.name) + self.assertEqual("p", soup.p.name) + self.assertConnectedness(soup) + + def test_empty_element_tags(self): + """Verify consistent handling of empty-element tags, + no matter how they come in through the markup. + """ + self.assertSoupEquals('


', "


") + self.assertSoupEquals('


', "


") + + def test_head_tag_between_head_and_body(self): + "Prevent recurrence of a bug in the html5lib treebuilder." + content = """ + + foo + +""" + soup = self.soup(content) + self.assertNotEqual(None, soup.html.body) + self.assertConnectedness(soup) + + def test_multiple_copies_of_a_tag(self): + "Prevent recurrence of a bug in the html5lib treebuilder." + content = """ + + + + + +""" + soup = self.soup(content) + self.assertConnectedness(soup.article) + + def test_basic_namespaces(self): + """Parsers don't need to *understand* namespaces, but at the + very least they should not choke on namespaces or lose + data.""" + + markup = b'4' + soup = self.soup(markup) + self.assertEqual(markup, soup.encode()) + html = soup.html + self.assertEqual('http://www.w3.org/1999/xhtml', soup.html['xmlns']) + self.assertEqual( + 'http://www.w3.org/1998/Math/MathML', soup.html['xmlns:mathml']) + self.assertEqual( + 'http://www.w3.org/2000/svg', soup.html['xmlns:svg']) + + def test_multivalued_attribute_value_becomes_list(self): + markup = b'' + soup = self.soup(markup) + self.assertEqual(['foo', 'bar'], soup.a['class']) + + # + # Generally speaking, tests below this point are more tests of + # Beautiful Soup than tests of the tree builders. But parsers are + # weird, so we run these tests separately for every tree builder + # to detect any differences between them. + # + + def test_can_parse_unicode_document(self): + # A seemingly innocuous document... but it's in Unicode! And + # it contains characters that can't be represented in the + # encoding found in the declaration! The horror! + markup = 'Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!' + soup = self.soup(markup) + self.assertEqual('Sacr\xe9 bleu!', soup.body.string) + + def test_soupstrainer(self): + """Parsers should be able to work with SoupStrainers.""" + strainer = SoupStrainer("b") + soup = self.soup("A bold statement", + parse_only=strainer) + self.assertEqual(soup.decode(), "bold") + + def test_single_quote_attribute_values_become_double_quotes(self): + self.assertSoupEquals("", + '') + + def test_attribute_values_with_nested_quotes_are_left_alone(self): + text = """a""" + self.assertSoupEquals(text) + + def test_attribute_values_with_double_nested_quotes_get_quoted(self): + text = """a""" + soup = self.soup(text) + soup.foo['attr'] = 'Brawls happen at "Bob\'s Bar"' + self.assertSoupEquals( + soup.foo.decode(), + """a""") + + def test_ampersand_in_attribute_value_gets_escaped(self): + self.assertSoupEquals('', + '') + + self.assertSoupEquals( + 'foo', + 'foo') + + def test_escaped_ampersand_in_attribute_value_is_left_alone(self): + self.assertSoupEquals('') + + def test_entities_in_strings_converted_during_parsing(self): + # Both XML and HTML entities are converted to Unicode characters + # during parsing. + text = "

<<sacré bleu!>>

" + expected = "

<<sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>

" + self.assertSoupEquals(text, expected) + + def test_smart_quotes_converted_on_the_way_in(self): + # Microsoft smart quotes are converted to Unicode characters during + # parsing. + quote = b"

\x91Foo\x92

" + soup = self.soup(quote) + self.assertEqual( + soup.p.string, + "\N{LEFT SINGLE QUOTATION MARK}Foo\N{RIGHT SINGLE QUOTATION MARK}") + + def test_non_breaking_spaces_converted_on_the_way_in(self): + soup = self.soup("  ") + self.assertEqual(soup.a.string, "\N{NO-BREAK SPACE}" * 2) + + def test_entities_converted_on_the_way_out(self): + text = "

<<sacré bleu!>>

" + expected = "

<<sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!>>

".encode("utf-8") + soup = self.soup(text) + self.assertEqual(soup.p.encode("utf-8"), expected) + + def test_real_iso_latin_document(self): + # Smoke test of interrelated functionality, using an + # easy-to-understand document. + + # Here it is in Unicode. Note that it claims to be in ISO-Latin-1. + unicode_html = '

Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!

' + + # That's because we're going to encode it into ISO-Latin-1, and use + # that to test. + iso_latin_html = unicode_html.encode("iso-8859-1") + + # Parse the ISO-Latin-1 HTML. + soup = self.soup(iso_latin_html) + # Encode it to UTF-8. + result = soup.encode("utf-8") + + # What do we expect the result to look like? Well, it would + # look like unicode_html, except that the META tag would say + # UTF-8 instead of ISO-Latin-1. + expected = unicode_html.replace("ISO-Latin-1", "utf-8") + + # And, of course, it would be in UTF-8, not Unicode. + expected = expected.encode("utf-8") + + # Ta-da! + self.assertEqual(result, expected) + + def test_real_shift_jis_document(self): + # Smoke test to make sure the parser can handle a document in + # Shift-JIS encoding, without choking. + shift_jis_html = ( + b'
'
+            b'\x82\xb1\x82\xea\x82\xcdShift-JIS\x82\xc5\x83R\x81[\x83f'
+            b'\x83B\x83\x93\x83O\x82\xb3\x82\xea\x82\xbd\x93\xfa\x96{\x8c'
+            b'\xea\x82\xcc\x83t\x83@\x83C\x83\x8b\x82\xc5\x82\xb7\x81B'
+            b'
') + unicode_html = shift_jis_html.decode("shift-jis") + soup = self.soup(unicode_html) + + # Make sure the parse tree is correctly encoded to various + # encodings. + self.assertEqual(soup.encode("utf-8"), unicode_html.encode("utf-8")) + self.assertEqual(soup.encode("euc_jp"), unicode_html.encode("euc_jp")) + + def test_real_hebrew_document(self): + # A real-world test to make sure we can convert ISO-8859-9 (a + # Hebrew encoding) to UTF-8. + hebrew_document = b'Hebrew (ISO 8859-8) in Visual Directionality

Hebrew (ISO 8859-8) in Visual Directionality

\xed\xe5\xec\xf9' + soup = self.soup( + hebrew_document, from_encoding="iso8859-8") + # Some tree builders call it iso8859-8, others call it iso-8859-9. + # That's not a difference we really care about. + assert soup.original_encoding in ('iso8859-8', 'iso-8859-8') + self.assertEqual( + soup.encode('utf-8'), + hebrew_document.decode("iso8859-8").encode("utf-8")) + + def test_meta_tag_reflects_current_encoding(self): + # Here's the tag saying that a document is + # encoded in Shift-JIS. + meta_tag = ('') + + # Here's a document incorporating that meta tag. + shift_jis_html = ( + '\n%s\n' + '' + 'Shift-JIS markup goes here.') % meta_tag + soup = self.soup(shift_jis_html) + + # Parse the document, and the charset is seemingly unaffected. + parsed_meta = soup.find('meta', {'http-equiv': 'Content-type'}) + content = parsed_meta['content'] + self.assertEqual('text/html; charset=x-sjis', content) + + # But that value is actually a ContentMetaAttributeValue object. + self.assertTrue(isinstance(content, ContentMetaAttributeValue)) + + # And it will take on a value that reflects its current + # encoding. + self.assertEqual('text/html; charset=utf8', content.encode("utf8")) + + # For the rest of the story, see TestSubstitutions in + # test_tree.py. + + def test_html5_style_meta_tag_reflects_current_encoding(self): + # Here's the tag saying that a document is + # encoded in Shift-JIS. + meta_tag = ('') + + # Here's a document incorporating that meta tag. + shift_jis_html = ( + '\n%s\n' + '' + 'Shift-JIS markup goes here.') % meta_tag + soup = self.soup(shift_jis_html) + + # Parse the document, and the charset is seemingly unaffected. + parsed_meta = soup.find('meta', id="encoding") + charset = parsed_meta['charset'] + self.assertEqual('x-sjis', charset) + + # But that value is actually a CharsetMetaAttributeValue object. + self.assertTrue(isinstance(charset, CharsetMetaAttributeValue)) + + # And it will take on a value that reflects its current + # encoding. + self.assertEqual('utf8', charset.encode("utf8")) + + def test_python_specific_encodings_not_used_in_charset(self): + # You can encode an HTML document using a Python-specific + # encoding, but that encoding won't be mentioned _inside_ the + # resulting document. Instead, the document will appear to + # have no encoding. + for markup in [ + b'' + b'' + ]: + soup = self.soup(markup) + for encoding in PYTHON_SPECIFIC_ENCODINGS: + if encoding in ( + 'idna', 'mbcs', 'oem', 'undefined', + 'string_escape', 'string-escape' + ): + # For one reason or another, these will raise an + # exception if we actually try to use them, so don't + # bother. + continue + encoded = soup.encode(encoding) + assert b'meta charset=""' in encoded + assert encoding.encode("ascii") not in encoded + + def test_tag_with_no_attributes_can_have_attributes_added(self): + data = self.soup("text") + data.a['foo'] = 'bar' + self.assertEqual('text', data.a.decode()) + + def test_worst_case(self): + """Test the worst case (currently) for linking issues.""" + + soup = self.soup(BAD_DOCUMENT) + self.linkage_validator(soup) + + +class XMLTreeBuilderSmokeTest(object): + + def test_pickle_and_unpickle_identity(self): + # Pickling a tree, then unpickling it, yields a tree identical + # to the original. 
+ tree = self.soup("foo") + dumped = pickle.dumps(tree, 2) + loaded = pickle.loads(dumped) + self.assertEqual(loaded.__class__, BeautifulSoup) + self.assertEqual(loaded.decode(), tree.decode()) + + def test_docstring_generated(self): + soup = self.soup("") + self.assertEqual( + soup.encode(), b'\n') + + def test_xml_declaration(self): + markup = b"""\n""" + soup = self.soup(markup) + self.assertEqual(markup, soup.encode("utf8")) + + def test_python_specific_encodings_not_used_in_xml_declaration(self): + # You can encode an XML document using a Python-specific + # encoding, but that encoding won't be mentioned _inside_ the + # resulting document. + markup = b"""\n""" + soup = self.soup(markup) + for encoding in PYTHON_SPECIFIC_ENCODINGS: + if encoding in ( + 'idna', 'mbcs', 'oem', 'undefined', + 'string_escape', 'string-escape' + ): + # For one reason or another, these will raise an + # exception if we actually try to use them, so don't + # bother. + continue + encoded = soup.encode(encoding) + assert b'' in encoded + assert encoding.encode("ascii") not in encoded + + def test_processing_instruction(self): + markup = b"""\n""" + soup = self.soup(markup) + self.assertEqual(markup, soup.encode("utf8")) + + def test_real_xhtml_document(self): + """A real XHTML document should come out *exactly* the same as it went in.""" + markup = b""" + + +Hello. +Goodbye. +""" + soup = self.soup(markup) + self.assertEqual( + soup.encode("utf-8"), markup) + + def test_nested_namespaces(self): + doc = b""" + + + + + +""" + soup = self.soup(doc) + self.assertEqual(doc, soup.encode()) + + def test_formatter_processes_script_tag_for_xml_documents(self): + doc = """ + +""" + soup = BeautifulSoup(doc, "lxml-xml") + # lxml would have stripped this while parsing, but we can add + # it later. + soup.script.string = 'console.log("< < hey > > ");' + encoded = soup.encode() + self.assertTrue(b"< < hey > >" in encoded) + + def test_can_parse_unicode_document(self): + markup = 'Sacr\N{LATIN SMALL LETTER E WITH ACUTE} bleu!' + soup = self.soup(markup) + self.assertEqual('Sacr\xe9 bleu!', soup.root.string) + + def test_popping_namespaced_tag(self): + markup = 'b2012-07-02T20:33:42Zcd' + soup = self.soup(markup) + self.assertEqual( + str(soup.rss), markup) + + def test_docstring_includes_correct_encoding(self): + soup = self.soup("") + self.assertEqual( + soup.encode("latin1"), + b'\n') + + def test_large_xml_document(self): + """A large XML document should come out the same as it went in.""" + markup = (b'\n' + + b'0' * (2**12) + + b'') + soup = self.soup(markup) + self.assertEqual(soup.encode("utf-8"), markup) + + + def test_tags_are_empty_element_if_and_only_if_they_are_empty(self): + self.assertSoupEquals("

", "

") + self.assertSoupEquals("

foo

") + + def test_namespaces_are_preserved(self): + markup = 'This tag is in the a namespaceThis tag is in the b namespace' + soup = self.soup(markup) + root = soup.root + self.assertEqual("http://example.com/", root['xmlns:a']) + self.assertEqual("http://example.net/", root['xmlns:b']) + + def test_closing_namespaced_tag(self): + markup = '

20010504

' + soup = self.soup(markup) + self.assertEqual(str(soup.p), markup) + + def test_namespaced_attributes(self): + markup = '' + soup = self.soup(markup) + self.assertEqual(str(soup.foo), markup) + + def test_namespaced_attributes_xml_namespace(self): + markup = 'bar' + soup = self.soup(markup) + self.assertEqual(str(soup.foo), markup) + + def test_find_by_prefixed_name(self): + doc = """ +foo + bar + baz + +""" + soup = self.soup(doc) + + # There are three tags. + self.assertEqual(3, len(soup.find_all('tag'))) + + # But two of them are ns1:tag and one of them is ns2:tag. + self.assertEqual(2, len(soup.find_all('ns1:tag'))) + self.assertEqual(1, len(soup.find_all('ns2:tag'))) + + self.assertEqual(1, len(soup.find_all('ns2:tag', key='value'))) + self.assertEqual(3, len(soup.find_all(['ns1:tag', 'ns2:tag']))) + + def test_copy_tag_preserves_namespace(self): + xml = """ +""" + + soup = self.soup(xml) + tag = soup.document + duplicate = copy.copy(tag) + + # The two tags have the same namespace prefix. + self.assertEqual(tag.prefix, duplicate.prefix) + + def test_worst_case(self): + """Test the worst case (currently) for linking issues.""" + + soup = self.soup(BAD_DOCUMENT) + self.linkage_validator(soup) + + +class HTML5TreeBuilderSmokeTest(HTMLTreeBuilderSmokeTest): + """Smoke test for a tree builder that supports HTML5.""" + + def test_real_xhtml_document(self): + # Since XHTML is not HTML5, HTML5 parsers are not tested to handle + # XHTML documents in any particular way. + pass + + def test_html_tags_have_namespace(self): + markup = "" + soup = self.soup(markup) + self.assertEqual("http://www.w3.org/1999/xhtml", soup.a.namespace) + + def test_svg_tags_have_namespace(self): + markup = '' + soup = self.soup(markup) + namespace = "http://www.w3.org/2000/svg" + self.assertEqual(namespace, soup.svg.namespace) + self.assertEqual(namespace, soup.circle.namespace) + + + def test_mathml_tags_have_namespace(self): + markup = '5' + soup = self.soup(markup) + namespace = 'http://www.w3.org/1998/Math/MathML' + self.assertEqual(namespace, soup.math.namespace) + self.assertEqual(namespace, soup.msqrt.namespace) + + def test_xml_declaration_becomes_comment(self): + markup = '' + soup = self.soup(markup) + self.assertTrue(isinstance(soup.contents[0], Comment)) + self.assertEqual(soup.contents[0], '?xml version="1.0" encoding="utf-8"?') + self.assertEqual("html", soup.contents[0].next_element.name) + +def skipIf(condition, reason): + def nothing(test, *args, **kwargs): + return None + + def decorator(test_item): + if condition: + return nothing + else: + return test_item + + return decorator diff --git a/venv/Lib/site-packages/bs4/tests/__init__.py b/venv/Lib/site-packages/bs4/tests/__init__.py new file mode 100644 index 0000000..142c8cc --- /dev/null +++ b/venv/Lib/site-packages/bs4/tests/__init__.py @@ -0,0 +1 @@ +"The beautifulsoup tests." 
z@rkC`I)KtPz_N*-^f@e8CZ=A|9r@A1CEeIONaXMbj$FY!13iLZexL)-8>ovPc>{uH z;b;rE1+H@_qmxl+UBrtA2}P+=Z=Bi!^b|IO)8r+Br|Cdy`Um4}^yLRXo<>xX2|r|z( zwm64mSxiWc*0W4jnXEB+p2-VLKFs7QlaDYFL~bTTX^Upf>o{cI3x2Ky;z^XRE0yX= z#Zp75R4ScHNxmZT1wRt!aUAvtl8qiO@PL>*!k(YU3`5a@@BJ)-1u#(@U^cl4j~SLi zrv&FsF*!~}Nj%yJ7^B9hB(8X~*EQPdf$`ZG&(>hxyw9}79Z{I}Z!=Rx^|UCqk7qPl zb=E}9G4zpq-j*K{R*WMN!{l`>CJ~)zxfDdMLJH2bWiEd!kWFU~YE1i}_r z(m2uyP9bw>V;LP{75)y}*=&bg$q@e}UOd7&Ib^Ab#rw`fNlT?Uln7`C_~Bsm#0di0 zVa4J;fl25doRbDtXW*PN`iy>@d(_Gc1hmVF3pg%ir3D<9vcdw6OIcR|$EB>Nfa6kD zQRIeyL~aOd8bq)W z=vxD#`>2d)eOhCfsC(_|bFY1B{n~G0vaY^cJsO{fN?tA>!}IZ)!&Mu{@D>b9F!Lp3 zatDKuGBYpkC>1tn&EsVVCx{O2UI$Rl+oj41v}@Kek`o?|mksqm zPB41T- z9n}R8M72!_a-5r_MLTXCjElq-dK#0b@qpwK*<8|t>1?BC%)1=@y0B=)UTE--hK%M1qh|wYjwabWfLu^m580`XO zDuB1gN)ZZQb@HlvSgz8Fl9ZMieCh+sqCt@1UJ82&K3WiJ2tAa0$B^Xv7cxE$cmv)N@Vp7 z^(`po=K9q;exgvUJLWNVZ@Y9ZcyL2KJ=KEa98IM{@CpTaeO0&)b$u0Hc#~5?PQEse z0mp0z*ZF?9ZLBYD1>JA9vbO zvJIB+-!d$3ysiM0#N(~8*a*zLC=edexh8b1YAsVm+&v|n!G$C6Iby{r;ztl!Vk7K4 z6R0>DS$1MiHhPs19HaWnps>IM5{=)}dEFR;qS`sK3Q6*KB{^=Z?PD4rCGdpnB6?5F zuXiKMO-jj5Y-@AexXC;OWig+(ej1RnKE~u_CZA;TcbWVwlTR`EdrW?w$)}lohRH85 z`9&uCTaz)~{uGk4Y&v1VQ1(<5KAaSMHwk!h5;({41U?D1HL)07V6I^Lj!h3fH!Ef;0;@0Gib+= znZk7}Wap%ptO4a*3#Y!|6XMEG+$C_tTNKnfSHMkiG^eK)&4MvKRl%MGvaSp^`GaG! zoO$dpJVEO1T+LXWo`L|IK3^zW*Ivm$4X^K?I?N~VBgxytQk=IG+WwJMfPEI?b{(A3 z`W*To_~G{XzCc+iO{3X*2NMpjXLMkmxhJn8c7aub6oLxGub(3-(0j${oP<6gk*F$+ z%Ab~Qjzyc(g1U1s-{4PcVnP#Z5|&~(5X^urIp%0#y`aH>#|r8tJr7$;g*agHkyGh0 z%!+AhQj=4f4q`^Vum`MW#_09Qycq+pl!exdFZ!1TeBT}?KE3(|m#IL7Xv}38VSqpZtRDx6sHY2Fj zo#xyEixN-^G8GgC`OD{r+$c6Vun0x0VZ$0n1B?HZ^TAD)YYW^J!A)Y0l6zmcNiSqf zyoqp9U9O<^K7Lbe0ulNSHT zC>FY5-a~>oJ7ArNx=w`#L(9PlWXf^e0B^)~1`LyVFU&bb*VPk+Mevs_aUG10RlH9a z&5)W%woa-DlRQGk#l!n~JXezBS8;`P$yho)Fe zmhvF=A(jossmuv=gH7i`Iz#7Pw6rU9#9NNHghm|jXohZW0*`>*ScHcq;Q0rpKOAx3(XkWeugUE#nTBU-=j41@z+fIu4t-EU4BaFwA5 z1o#Zw20vi6zU%BCvd{a2Lft%?hQR@y!ABB&K2^xT)%@U?U9DZJ6sl@@*VJLgrCXPf39}>Q zL<)RGUJ-IW(CCb72ig%u@cym~Y6&$1(J_NU1=mOS=2Gx)1c1d-tj4380X6`}7K)wd zYtzk?O7>;8`5!U)O(tJqvIB`bboPC-k3NyjzcYXK+$p5@TK|}j`#s@XM3}xWfQcuN zIbYzn#1xU}pJX?NaF>ep^&oaU#o0A3HI%o6UGU@B9miq+A`)L0GxWd7Nf>TnmD^_A zX6(ax*vJ_Bao%o>8Mos+VjM8;z)l-NrF_GQ2BOW31CWO-q(0s*<_$n86{- zi#E&*Kn-m1%+%?fICHR)DU=bIV60}mlJ^r^VG0B@0B#`D)Jj&olV~x)Ov>YS^i*5& z_(&-G4SkCPCtQjOwMW{mT@jpa`6ysbTk2ekYsKcWB~c@O=HoAaLTM89o&K}ep8M2m z&;8tM&wcu}=RTv}h5Kv$Hu7D!lWI!RZca_u4AAu2|HtgEh$rc8Om9fYN{0@2DWIk6 z0Sh7>_~Rdd3rc<5tdt)C#H?qC2)gxEoEirelv*av zT4&8}7D6h_vb}F&P5VBM?`{B*Oqgu}pIVCfQNZX+As7t`j1C?g8@pLpsMW;nun^Vu z&o;n7mX~h<2J(4J-vdrb9_CY2Fo&vkeq-Q-QiVxhSRft<_gGeBP+0XP&-4_bI^|RQ zyOtSM8X4BFF$r@Tssdu(w6oJ8;fzYH$y4q5%P=HTRQhCxvks65ebl-CbQAdf=gwwa zwBd6U8U&lcfNbe|-~IFd-N5c4wL?dbzU$qGj@HTaUj59gpM3SxufB5qPP>UkbIvj^ zb?o9?9ug?C9Z{#Cf^qVL|(g;%tG*SYuihkC#H=)(}A%OtGe*~9qy_2<9*`ugv@ z{`^;8fBy5C3*1@7SBDc#0V;M+bHJ5)r-Z0Wd}OpFOS8R>ZEiiccj_}2PRHo1$%9*Sfv6&4`}-r?HDvdPX*yxIAt() zpqWjI2%tgK52W$~Ezr1pr;#|ReP9fm-vKUbil$`i7`SOdaCv5kjv1(a$?kgwz8Rk+ zy}ucqoSb|!U3X3&I&|pB{ge9vjDLYjt#2^-LnNC4&fDIDfXnsG6^vlfL`S4k3^;%R z{KO^017HsUsFa$tEGJZ9&6|NR{CNO8GT#7rkKv}d#iF26%La=9GKqSl-W#z66~{(a zCFmCNOKiFYqjd+WYuj}3=5bDz!448CeJECoq`A~K+FYaF0WuqQfNX&Zic$*v7%tRi zKJXRFA_%f4Zc4hGuy1ps-b&JF)}}0(dI;)NhH)ow2R}G=Wb&S|V3V*1$DW)$eds%5 znTMw9N#q4S9r}x}{w{1zvM)ErSa-L_WSlB2N41LCqcy~#d8V}WuaUQ1ES(kh+9yhpt|2D=@g25qJk4w@h9I>#8K6NMuT;VH3+>`wQIhW@(Z}pd(_2QwWm9dO? 
z%;{C{n)RT{dN7prK$lWg&QQkNH_aH-jrBop5>w>V3mXYl3XtL7;%n=hO#X<;ze55n zXywB`3(hb{cNXmF9IW77SmnNofD^)ln?T;q5vF>iVR1U9B=ARHCp3QX-gTgtO$rM} zj%H=xWq$!TN}*v`AeeC2(|%S3V8XkOwc7N7W6cRe=eS%gi6v<?VrXD9?NE`PGwIdlqf^i{Jk0^~v+-*hAB44|nFKzx~B; z|MIuL%m~~xwkP=dAAR#<-}vVD{#0#HK&Va!J%_*t3~$I`FW&2u90wfq*FW5a#<0mK z{OjEMM;U6XK4Eibx>le4Y@I9}T%~o9YQOdO@mOKX7kTvxlV4`?D@+3H%I))fS9Z9L zeA9cS9QRa~)spC6$CdAvkHUX%{RUtC5);8oaYXLttt4Z(a+Q^}d9Mf-alsGMNg+=7 z2zCU-#LrL!o8{64-_7pYwBTUC2qzvhIBcawDfZ%PpJefhwFskOBxj4=F@g{O?_W=8 z1#MdUFWMX0ly+FI>m&dy|GV~C?oEx|qf%t_NA541AnpLrVs9ufe0W>sPPDAt8(fKz z$CiZNo^%+VO6^(fAWA@qCV<&6iMa0O-sn<501$HAKo>ZR8zTv|lJj2JlM7c|?hCio zT4X19@J>zU#nhfF*!gQc$~}db;@&(N7X>FkwS3c5)Kgi(PS|G_9bV#bl}KQ?e;zH} zojIeH@m)~+Fz<&)2n)f~O`e-F*YCnt_}cr|?V}(^y(gR>QzD|#Ht^6L_8FYaMychO zzsfdofZQJNI#i4RJvBAyZ4Dx3;U6>Ce?XFp&zCNn8=|8IL>31HJk9Y@d^Qb#Zz9?k z9ZJep2bnI^9()>%3`ZpxnTK(6c}f)9`(#lj*A4g~$8nfA4pe@G&D?fI8~NF=)UrQiP;{$7@$`2qRa%MdmH31{m+Gx-xFpmu>HXYkyV ziI5-eHmNZCJIo}rxk}Z|ZO8sM#btzrdOMzzk+^a3Lt(6YRA-EpLps#l+C5(P8P!Ddg(0nU-a-sp1XP_ zmyp$YmJ)MKS*K=BJ6HN4V(*=373NU%H28Nq!j1Uw7a7vAdn2#FGa~v(f+^!28i%^z zb>|wg6Qk|3LZ3x-*6%Rk_GH#SWAb}UexJ!TCSPOn2TZ=s z|H$M|nY_V7dfrUN`0Tfkd{akXISiOd=axeW?=LB5-VfmDP2G-Zq+O{&`KE~M!4I&~ zj^nToAZbn_b=!sr)_ls{MPSM5ZyB8Cf6x)oH%lc06#AaT1x-Dgurd`-Z;H^JZvcYE zf@eY%k*pgB8dSM~dOto_c;p2uGk7Be5W>P$z7a9MuvI8GN2b3Bf<0d;*^E;D4Kyh* z?fanfsBH^7^+I{#Hz})gH>p-hH^GKfHw|uxL-s-^NGxkf!wbmPh!!50+iKU;`hez~ zB}^OhnR*G12dJr>ktx=&&It}Q#7V*zg!QblIoem)VTw0yjMR9d+mNZw-f-yw2ok6u z5TYx%Q6Qk|LcNcHKXR>nu26Fb5brrRqN5na*sEO#UsBIy#HT8d%_FzOy>z)GGAmkXl zFa&$7b-|XnP*)I|ivO2`wm)JJ5u%F!Vc@q2ezHalS~mU=x*!KN?Aa|2#tl`NMb92U zb)mh8|D3nqWAc3_P4*#Iq#NtMBl$HpmE(xToXJ#7o^|FCR6#oM2=!mt#DL^+S9YmB z;m_MY#eKyuK#!S$f`ZVx^Pzpr0~A8_O%ytc8=?XAW1yYUHG}sW6;2ZpY z*a1~lVIJ;4@V|_Ja2LHY@#Y0#Ddbz{(s%XI*wgi#Qg9J1Q4RV`qm~^v(L9r2QqJwEicrt^+^o zhsgA;b_jJ?gvxv&*f&x02YB&^EP(8VsA=&DhD?kcu5_HZ$NDTzZgj<%vyfBS-?)ktb!b0CGR?R-ZQSE5DK{sN=C<;;DqkN>q414quE%FepI zXSlVBqpYaE#Cx~T^MJzVh)}+N!tW?`WT~|*(#@rafind}3H=mbe2xkA1R-hxU?*6+ zsVVh86HF6|Hfb;hMn%T}{vx{)L$L#C=uPpFcnW^4Aw0zv^^sd5Lm$$P3>iZ`!>5L( MhCifzVCc602PmLrwEzGB literal 0 HcmV?d00001 diff --git a/venv/Lib/site-packages/bs4/tests/__pycache__/test_tree.cpython-36.pyc b/venv/Lib/site-packages/bs4/tests/__pycache__/test_tree.cpython-36.pyc new file mode 100644 index 0000000000000000000000000000000000000000..11b9bf99ec0d290387241a2cde8cf32c7503c6f7 GIT binary patch literal 94447 zcmdSC3z%FI)X!8&EvdStQtQ$0QnjR#G_9HL>Yi5h^h~yTakF*2^4LdWKh6M(jg)uZR7{+5`^RTg(HP|x)HmnU^7K6Efy@oaH_`#ab z@U1<&>)ro95jXB5Z)TOcn`K?njhpus_r^IV&N*?;i4!NbY$+c8t+~qQ-bfAo-caZ- zi{GcQxt~Z64XL5>kV>g^Gqsv7r&Ii$X=ch9*=NgH+2_hR+2_l7?6b|`)k3)-_vV@- ztHpA0b+kMx$NA>i>X!1B)$#K9>elks)rs=N>bCN>)$Qf&t2@d&R(FOK3IM*HB>KNc}R`i!B<@!DL*VnMRQd8 zz>peMW1k#SWA*#)WXgMSyhV-Uc)Y$B$CEhTswQweA;R`%0)tVh3AtlmZ}ae;qq#=u~w?8Yt^-Ob-7+@EtNWMV{JKjNs%upwNQ8IO2=)R%hl3) z)p6?%UUIInv{ZNMYi)DNEnROko2A80qp3>DUW|UVe3n~UZd|Lc@jHrcb(J6ET(6(7Hxa$3a@LgNFAo(r8Yo?^DFqO>Ec(}Z>Lh;h+A)00Hj)d9!Fbj z)ok8W+MAH?tveN`zT9xz&Q0&07ZzT6@fB(10$_NS?WsFcV_x=^kDd6Cmzz6RZCAbF zxz_3`8t;vstJj)Ur`m3`*1X|!ty=r$dfgkj&~jF*?RMSqw$4|tHI}Q3&HAOb0}y-p zOEssl-uA}$2{;E#;I`tW_RVJ9U8&dGUS^@X?2RwfSJ#`>ws{QSTHphu2QTklZLD8h zLW6c(syFMkcFVbR6PUbl-f>!vm$|Xp^s-HqHA2Dv1Jly%>jC_z50z73VQH|X^e2bF zwlXSzFw)0)JjCq`UZgbe&`%v5(x*`nllS8*s=Z5a2Zl&*#4U%-T-QRhs zdbM71J3?;Nw$PlxdUe??t$=9dT&?9e_PrAvsS0XYtF4f7LBa0fQoUMRvCr39fJS4D6kRQmu$%Z80srArbxrZrYb94eSE|(= z-fOPArNx`J8n;F2tk+hmYYlgG+MchWW6B>vN}ycA=59x^46wXeyq&t0x-#_b>9?|% zhQ2^5%9tEQoS9#^d}e)}RCf8rR;}8+ zOhUN4eiL8etR7jf)~;5U(W1+XK0k0l7AG$+x+gCSn4|>B*Xz^kH@{Y#X@ah6t=09} 
zC;5^5;`~teQ6In4RTr;rYcg`DXPSJaJBuIwW9&Dh`mXbDb*jy&jF$!Imb@$h?~S=D ztxi)_K<>4bpj8fu%-a&F{iy!|L&M!&$sh18$?doAlH4wyBcJ@Qk=m$(2(|6R523a( zQkxpuW9SXoN;i}sVk+&hqdNDqu=FOFaUR6E=dt+&aU>ZZ1XMCjI1l0Jsf_b5D=)Fw z%Od(3Zz4hfsO@RoB<}&s8rS<9V!*CSgEYGRHw<`W0eYtP+xH-`O5v# z_sSQL`kZ@F9Kq%iXoYlPM9$Q3VE@=1PGghr3V0&;VOnK`AA%!dpHq1?jD21e)Cl&& zs;EY>FQ_r_yDi|C;7}JDYpVO{vsJfIGaQTD3DOoEOC6dr650@+IVZvuU9JfskGm@P zg*!#IH5;pq_NilsOKz*wXhYzywa6$%$l*32Utx)m`yHp%Szc-0G}4ch-NU5j*iJ%B zbM8AS5YEsQvZ}O7-$`A83=j?u`kKnlcS|$X*<&+DtFtqUv&UzSF3#fkggHLRV=uMn zrH)T!!4ANIgdV&s8{zE1OF(==8js<3oGP}k6%}bOc*sS57g02nDyF~~3?!!`H$_^* zn+LW=W!{Ed3$>=R3y;jK9GiXh%b&mMu73G%JIi1GVRz;z&UCimQsnehO1B{6$j6h= zox>;wHEEZ8ZD*}fYpHsL;!N825*`cM_HeXqJ$FYMnZu*fNWMBiSsMtulS=Un&<5m* z=zcqOC2O0To9`UJGiYw9p-xRk+BP{${Rs2~@hZ*l7K0OspxB41rg8!bF9SIyh!%K2 z0-&9PpmKi%Do}`=29&(Ifab%f*Uh6K+6$@f{$$`pABsR|02!!sTVQhsib+Tz?Ni}|T+Kp0-8Szq%Ce%~Dg zjGY1uxw8(v`eu6tV*GVNL5M#M;(ss-gV?>1##{2(j~W8*1YnaXva z+ideS1F9NDoy>N+`%n@g%ZJjEiAUV#)jgbQb}xnn2_ zxoob7(%mQ?Ypt}k!dTFQb4O1s>H0y&1b9X?P5V)GF%a^Fywox0QJnBnC%(!7)>n&i zaMDXX;qZ7K+NKEPNgQm*RCYvhCvRxuL3bPl(3W~2wL6tfJ&?(!x>hdtqD-EPV9PSz zXHicaTT~l0Tx0kE=Fo)-`mCagH8S7bYoSFIt4e8o5HjbW(X$|M9D>O?hXPc7hQH6E z@G`gyCn%a7QceUiQ#>(1`+7tmL!A@yr&6sz-c=fI#XmuzgKK{twOH}IE7d&~;z^pU z{RjHhV4+4VQ^fI0U9#N(YRf~n)3?%KKu|!nPJ=NGbQ&Iu;3A~b^W9-UiezI%ua~c` zuh*ezif$(I#f(HqAYQ5&;^Zhr64o!NZoOa-W$cJz6r*@ zZ$qWFj~7!D=;;&bu~c_of7IA#dvykf<5hy8;eIW&X zVWKcr$Q6nXuhg*FKT4?6*xa2cg3+gMkptMX-hH9(7ENFt+L#M7P~Ek}z687jKGoD4mh+gRY!+vdLp)Ii}@ViOB@u^4v!-^&fo<9+;tQIj?0-( z4K1OY!pcy9)pbKFgW3XX!-yJJTd^;y3AGLTQMFy|zOSe=uY}d z`p%GWZFs$403NDjMY4fkzYTtUq2b^)sBW5^+gjUbn@IkHf_JG7(=F_?^3dX@9;(g4 zTI?vAE9TKIXe9nC&5znFtVubyvtD<^0EFN5_G{~UM4P4F3C2WQ1*mhRQh_<&bumZ? zMgV18`w_NXk5udl&Xb7UJ`&jSi_kA+bB+F(lqo#0^ z*dXYr!SKO07Q`)AprcBI!%x{EMC1;kqT%@-kQf275CM&tH^EP$@NrJz`X#L{w6g8yp&*YW9y(L!_@GlpLVK(wY4jITtyIT&RMePQvNgea{? 
zLzzM*o6b|?dFEY1))E1^tfi(KI3N*_0L0%fUFS|3qiz|0x6|TKgB9a;24}L+kDv#U zx-O@0XK&~5Y)<7sd3j@(<54Dbl)se?9p!H2^pVkxx-ZPA#Q-SgXjHCMo1HowpT2YG_ffUBtwQt4W9Ysede^x8i;n2xEMP#L zQxow4-P3FU6?^AR79VHv2^Ji>JBuu8EEJ16ixn1EP&}fD^KWX zyz(|3g;(CL_JYApde${74ET(XFxx-)?Y4g$z(J^g1WpFJX5)Oe`}6&WhlUj`aWrX| z3WuxJ4Oo$R%#{J<SytOWbF?^ueQ2fw3A)#oZD zuYb+xmL1wwf8#Iz#&`Y3=YHc0UbfR-db&g40JZH#yIDUq>0g}G#$i~Oo)I+K6%w}7blDh$r>M{!l6B}Q&qjK6@Q5X7tsM_|%VOeN23baDYW^c9N+g3nl!J zM>fkqr6$WUBO^h|v=GOP=<6gf@tkGhH}gp3K47y`jWL(c3LQ-(8?3-DxItD{@ z_LdOAa1h}Q9hkiv?U|#vI$O@5^ip*3UdN|)3;tzo5zXKs-+@8_7T7w)dIk(U%(GM` zywp-$3x7P&C!k9Rlq%N|4_y&;d>v7iat?#jbvVXgDLh7Oh}Rr$PN?GCvM_~yU_(r7 zu6~Q7h~A;_^&uQt{ZUL63~Yk7VB{2>Sf8XH+1|rY^kw)b5RNVyvgjx#$mJGXVD?Gl zhri$U%!4s(Ag+mef|o{=lf{DlU;t)_d`_$aWBAd=Cjisf5Vjhc;d^Vfd!2&dmyXU~M}6RMmIaxWt$H{brJ%8|^$s|I;ep z*YWz-C$#C=p!?h{+@BA3qzBT4%uZ}DO?>!$$FM~ryA;h9bJ%1^8R}BP)b6BUj-ZKx zN<7r{Wn;dOkO3X*h4WDD^S@Vd!2S-eu5aTB~gjvoPD+hT}#$AFX9CIh^!@zVJ^pIQ`kff5C_SXb7F%F&tPl zy=OG&$#Nm%>Q&np1Pu^A5)-~TMD)~VI|5meMZ_!)0(PVW9YWd}Y=3W?$%44cAClQ~ z!kmWZyKm?Ya%}dswYNGghILpqVl&;u?_3{FdbYbGq0;WQo_qbkOLh@$67PvX4eQ#4 z`i*r=W~}SkSDIak-y*4pAfC4FG-^Q)X%jW#w1B%JpDr-k;4YBUQ=dgngiFzwA0Wpf z03&F6{fHWkFe{_;^WAR`c$LKV_7@w%p#=08;Y$4PX6}VmMkW?*aIfZkns=QgM%eBB zp(jyvSKZ}M0@vc4#{eN(6?bokx=bcdtUSLd@gYiUVvP~Z*lv@I85!<~CpyV&{x1w> z;M#kclVUgBGpcxE;d-l7#q_?7g0uX0(52V7!H$S|G8nBcZvI(3%I~8UeS=6aU%<(o z-R#G!kL~yQWL`J1J8V9DkSagGB=$rC;xYTXgIU}abYP5m8QU8tgHKs1$im8wA#nG_ z%~Qd~)mxGvC$$QD%(4p=+$eNfA(u&tv{x7pBF4N__0WNHgU|X3;6sSGgTgG6`|RWj zU_j29p>H^HSM+XQWfS{3)oUGx6pFvrih9gw9@5sJSkXEvu~xxMZAM;0W%#URgD{!r z)1pXktgj8M(gK&9&;s{S93*i<1TxBuNFB&;`5<(*$pAF?1=!8hG^lEDKi~+o9gI;x zwbaXfo)iFTKxF=@fHVg-NZV`7m;s^tiyfy?SDz!~$WCM)<-&&K`7T~zAsHc4GUE^= zH%q1%HTq=g3;jvseuGx*)1_9EAx?avAtQSlnRo=UDuC7T>_)4vSB+_$C&gV)1DfpJDOMEdBxu z`jbN=DCwP4PMju+|A@^kp};KZ(L$kcPhn?aT;kSoP#D3#Vqpt@W##S~ZnS^wQ>U?s zpF40^hi6c8#N!nSp-ZcXKwRlg@ONneLB6h>2xhNwG-V|s%n!YEg-{*35fOIGUoZsf zUkJp*#D>sOE;!;GH3?JB-Wj@`Y3FXiQ3*Rl=5`h#7VrefRsI_BpTC8=EAQ*GHJrbzUVyc&*c!TnAt8wG z)@P(&Y@6?nF{opwnoDP~_KaJ@`* zu_n{0%IRarb+=)Uni|ss@dkc7A7Vl0tG5+8e-&XGU@?*h#hKtGVP>3YAt4}Q-t-FO ziiHXK zsZ!f>+cOin?Pk`vrY`N~))jH?-y{484i04!sGiXA7kL|m z(!<36C&`&^u(uEs|~mAdc}`03`+9%_#H=VEz)sP4R6>) zDTk<5hL%O?Hd5oSlWxcGnxPctC!^o&q!DtbB}3gYCy9ia? 
zfgRXy&E^_i65t@#@M8eB^ZhI&EKkISbb%?H!<-#{0F-LV6zneZ#(;105l$L5(=jnc zCy$g8GIuUcA_?^c}p;%WLv8O$+rt>EBu`>RbMV4dnP`NO$cLyLM0Wgdyk$te~Ni zW*b?;0UOB~8heH$7Me~Ux`4FLR1>l&2^j1N4lMU_K98r3HMl&Cws3+`{G)Y%l>UZF zYpG&x<1Awu<}0nmE3lJ&jm@E)9fDNvKD$w4`d4I|i3)>e?!*C`1PxBGNjyp)O>rW* zz+ajs4N9I(rRNBRKDmMks+F}?MK|rgumJ=uytFcfCK(rfpO1XE)dQY3->tvepF~6J zT$B$OoA?SJKR3Bw@pYX zd}9i&HC+s|q7a{}=MpBjJN)40o7&sC&`{cpq4wii(7`bGYHcdA;+SZ_ljfTFY zPg5u2O&w-SHF54vbsy3S)0T|G zCeb4tph!Bmhd7x)4P6~^ zkU|!0dFX21VNfktwN5C=xDwJlNQ8ld0LOjU315!!VM+3~({FTsC~Z8S15v$PM>@(X zGD-$fB0(;|IAh5OGx3HsnYcooK7M|TUlpUtd!(BgqOxz=;eX;#YI_=~2{({x75m`S zc;e_z^oJkj>;DQ1PTh6Bz~YN6{u&F;hIRf1i>oZ`X9B3AS+EC*GV=T6@f0Nz!q0j9 z4B*d84xp^uQNu<1$6n7rnc(Rw!J+_@Knk4ookf;Sj?KI+uQ;t6%<*=UEw;4f3o%x; z$=TM+k|RenPEnJfajw*>94zVGVoTkGy;5#~X$^LCZSs{9Fs;F$9y$Sc%)$z0MQfX; zepaX30VfhI?OuFZOiMF-)rX$|RDeii+)o5+#RZyKs=Fi7a;=c{j?@K>1JLs~QS%AQPBYmnaS+rq8R5!ZPsdIKhwo%b>zA}xCPtgleL zgWulRT1e8EZw`8xP7FHI2G}aR@nG=XnArG0!gt$SEntaoc8@u1Vz%rfb^>)aGwspfv7z z-Zswf|I#>Li8Y7`8WdAvu4L=ld(9rTdqx~OvX%CA@7XLkO8ldNe=0vg05}_AX+GYX_kJDo> zx?W!lCstwMLOm~oO=$cAeh{qzT(k@U6w;#i!CY(2v>CV=av%nOhG*thmf??P7+RQj z2oM7y`#Pe5uM@3s0zX76EKn=d`d`48jOvH1Ih@%PWmls6%@GW&5E-9ke4PXWX}wjc z616_|JRNNz*fTZ(s^#S^^0-m9jRI&4|L@?sS9Ciyq$+Y-j;kj_f0lJczt!6zBpm&a zGyL`$De4|FAM+Kbey!2!xEtvollYh<{X-8* ztrmj3BZ6ZiCXGh&B|IF`NQikagZyiT=VEu6cL-Xk=*YbvBD6Fo0+SWV-utn4qm`g5 z^;+Y?T8U<-bNm45T858)nV&m>$PNLNKL$469;Zc}o7lhk<)95p#-d zlleWO$oq^V7#<>6BM|B7c`wVXmc6>kS5Sk`4jDun|K@#>51@J>fx8j?{3iB+Lu}%cfA^aN zfM_OJ+Sl}Cr&s!*spnmh)>~2jerPB}aaf-F`o1(;t6_c<`ZPGChWU5a1B!%+O|p9o zY^wj9Z6+zjB6-#zIgrdixtQ6goj_yJ4*RgiB6h`yZ6?h7LOP3RGCexW|45(&I?FHO z(C?%iVkCD`L7Lx^9=)YOkQDGRh2BmMm9<%X5C!zD%)*Jze@hN4IPe*fHb*9CFj+d9 zZ|833ac6=S189mKBUf`c^1~xCxAHi`L`tLM6eC)o{U9PPgKhIVj`MT4JqA%xao!@t zSRYiJvzuZWn^1MmqCPK6%Lh0z3@Oep;6&IATi#}&9#MU?QFd4dKD>j6%44_@)60zp zB)sI8ao%A@AE`FhC{Jg@vDMqJZLy)_cuMr4>=^VRCdKGZ-(92o=te)!&-sTe{t=5` zV)2hz{1X=cl*Itu=u5bMk>3~5jq(f_D&!qrtzol&L_PmDVepr!v z5=B(8Q82&yakGerK>8IVVWHJM&m=!kb%u?_z4{r*FP&dH(W5!TC>q~tmUu@gbjvhM zYcogrNV0!);5GXunBz8j)A9G`O|bKA^rjQ<&zmp}d0^wUb+Mbfymr}r)I+jaZ3p|G zfLX0|+Uzq}fuOUf11va#H2fCMRars`vL`_g{W?;nVX9-<45Ed$g^23iS|9mww^%ba zS|53wOyw`)%KKq`6suzqR>!U0R(1ytgiJ$h&!9fWULieW5a9@}I?> z0Ny_CjLQGd@H?jRZ)}A8ECE7MLnhU|C-#+tbp4%p24BaGp7%g2jvBg*anaRGp^vVi zlOM`((%cx=6Th3^i4ETwnCc8HLuRi&PxnB34i&kCVW-ug8Y22NK~A??daMfkgQ3FS23SNZ{6L z5<#{#0Q#+pw8N^^n;tvqxkJ%d1V>uP#wE?w+0XOupzyZ$d`!QP1U-xhx77WyfVRi6 z0z-FlL;MAavq{`mGroocqAZ%x%}$~P*@AWYc25b*1R?vYDo7?bTt#jjn}&q<3}Y?^ zCm{Ow>2t$!E?{z)2t9Ejyut<24LHx?dw_KrOHgs-Vi#SBarX-JIKZ;D0#W%GF3EB{ zy3+`A_z2HUviKbqzlFl|9DVycJR38@P4?0`69|OOsy{_gYaO(Rn4JmO<#tBQs}Twd zW2tAPGc}rMqx%hh*gt3CvG{Ek|ANK8VlhAkrMUVI|MSE=mnjS*ZXLc_hu3P@>>tt2 zKle{TH18n!yv*}=L@(!CZ2y6r{d!I%lfwd5pw^40J7qFPr{eJ zA0j%iv>ATchjp1KF#{@}4aG(E&-0tHk&{Ud#~5s5CM)yGcr85`m_vo{x5#Fl!pgfD zD9*mfCRUu^WAPOfePCe%BVw@lM{IjYY)5P)k-lcfi>Cdyy&1fd+Z;9Dv6|puf+ir)dMWJ z-h&J$gd=vbd!n`;(}+Fzi$Q}x}p2&1I zR!Ch>i?+^Lk?CK?MQ>N)2lbQdT8)V#$^9)hU3UPH?K(Cn%3PBfz<(t&{$B!Apa?7D zMd*u+cm5Y%>JjhKl>+hpRovwK0q+dYzkG*hW8(clayN*J_#i$bZe^_gVaJEdC9Pf6HQksQ+DD|2h9NW1^ndYS`=_?i)IdO=7tGQMiuQhoCv8b)%L*pZVa;krUZ2Aq z7cge07U&JH)vxnxd5_NB#AbMXN40 znh4-4wOgecuhvUI0aHeXW|L&_R?B-84t#2&PoES;1H8&f^39B%;LSyw+8An>S%{z% z3?=+9f?ONJ2MjBQ-{x80^o}4h90jI#Lp?H6)*+kGnJDu*%_cDVEYH1$!v9UKGF8+A zgn!SAFQf2^4J2|wRN)%d$)1%z3UR1jl9W5R+AjphLsGh3;M zF$AsIaCnKAI0R}`TLf@49R)85G zl)dBIB{C|{gthuuo`zGy%n9Ywm|tg}7B>iI-alc+3;XCw|Vfl$mdHJy( z@5ILhGay?mqyf&-Xk(%VS4XQgTaw1d!Q#uvBImSL{fVICMBgsFACt}Wl*}xLgOceM zzq)4(PxbqZM&#YR!D5?jx7?W4{&@cpOvBm7UgGoGHfGkgG3aIw66_$bf!tKj95?*3 
zNW#+7=rviB;+;QYF@_>6Q@xZM2b#`k5L8jo9DqsvuQisdivXEqG?*l)wEdajf%je1 z)I=&bbNc(#lnjz5@u_i;wAm4~fC_+;#nfl4i#r%7&VOR_`MEKWgrVX5XI_tW?-RCr z`>m+18JNUDQh^m;QV|{s!)A=Ik(u-ZVY6|q^U#|f;TIlYagfE+EdC>l|AWQXSp1(X zf=0+MiC*~+xRB+Sad?5i?-V>)J7GxR`5HF+M~QP9n|m1phd{D$E&#O)m3=qS#qT{^ zyPCv%cc^`8KlVG-BkBP5_o##F5ca#&lzJ5Vd(~s=F!sCE5jBnd9(7bbj{SY=m^zOA z{py4|iTwlW3H2oQ52~ls)7U?xo>9+Y|FC*a&0t?rr_?O=d)4#mH1?C~j5>?`J~gM# zVZUFUR~N8+>8%{$A>EWKIkv2fCbG97T@*JPB3Edq2E++*my7B zY_+a-^ulslSNdf+4gIO^v^6^w$!!88ep5Q89h!>_YUILy)Ako(f|)GY?QpvS*lE5(@!B&azWM zXTmZj23fo-FP&}Jj*!@0kV|J_X5DwJpKGlhY?qq#>NRAFuXD07SET3c6C{CJ@AXle zC()fkPwwTdVvzve_n`sZbA8{StL;-w@OIJ1!UJ%L-uMve)~NhEf3a6O|2GPBSgK=Q z20c`|Zh2Ja!sKMpY8lZa&7Qn)+EH1RrED=tw&JJIsI0cGLEnP5be@>rj(WLjA9aU( zW@lzxe(MbF$#@=AWd!G_UTe`D8uj8F*v>F7EHhBp%oBmH5(_=v!%RrvYC0PdvojEJ zVeaLPIra4>$DhoIrjv-1p&5W_+lTE1UXC>@k2t^{6J<<4yAh~99tRbe$m#x|+FYd;NF*^U8$O5w zvX#KcCd?&RN0ftDpV#Nw)aT&u5BQ_(rQH~@;wA+Hnm9Bk-RH=2!iKi(hu@5f6mZip9)8yW->SpZl)m;hLS zuz0e2410k^%>4-IK)}TjlNODLs0m6q2T><&5zGKpb)4Y_GS{Op%@G9lR?pEKRj_=kuOwc?5EcX2)i`B?3CPr}WVYF-77MNjDRXQQ0~eGYov#OS5czq+ zWD50Ac%-3GzwH49if|Dqkl=g@BwN;sU`#QHGQ@DRO>%+?mWM=QnScP>goU=HlOb+~ z8E&T92ymze&S8_JDgA-NfjUG=)(*|7BZtAGb59r*IDnCKMd*PWgGZ-x(~1}6DC`=L zubL=&AagSY8Ev~r6cd}Q(E?=zz_1~2P?;bHaRsHb^=hZx0Qb6tWE~})tfNE#B5jUXo=ae(9n8DC ziL+E~kq@-lw6TO(QyPFSNy;W9)+rr1Qo7md9K`DJYxQe&tbNg1U2ODujT@}61#YrF zdj@$%*IL&Pm#QVq22{^dCo`w7*AF`N(qgrCRrI;;cAQ^F&RMx(9SLQj51s?WyA$zB zy!85UY$v)~c_mcOye}|FV^}T?ORQs(OBK0+v{49jGUSb)aoJ|Dbjj9)$)X4(Li#gi zs6unKcwt!h)QFA)7Tty&xjaG|)PsS~v8wd?F=v88XB*#_Uhi2kPxo>|bc;S(!U;tT z0xJINzmBxL3zZLLi&|yG3VJ({EX}N-H$Vlto7Y$ZCqp4v=CHXRMG+7|Da7j+3ap&z7`v^onKH7Y0cNq?R?uni#U zsTt)=8`ScvI4tMn7T=F14CJ%CjTrVu4a0w(im3=6;lhHv(7J|t5Y9_Lc73Vs<($Uy zO3wsgZU6FRh)AJTEYaJ>2Ad$KDf&4n@U~WGd8O59e+SVuftpRGyYv|RBlwr;zOYFZ zV8d5fRFMx^R7FiBL>2uIZyS3#kij!txLaG%F<1v;MXE(dcafvqm0@dv<-Lfrwu5wx z`Np89opF{|FjiZf?dIgCaej7(Uuddn1Arl?J8g-lQ)$oUG-eV&B`KN|Ib-3tLx z^n6jnliOp$QpeQ$n1N%uiYddGGx-By=x^ufO1cTm*D@V?u#yl|Vj*%c7r&uaqx`Wk37(!E6LLmHYdAD{6h~JI7QVbX61h5Zr5L$d)Xt@matgY(#HbOW?iYzs-VgSNa84>Z@DKy56A10(Hps}8 zT&2Y>O^n2_V0QsO`eGM-M{w;AWa!^BE34SSjYxbjJg`R2QLvanKN%*xnM_z18rAZ! 
z*Ftln0Ib5!q(hhq(;{zTFw=39hQn*~{NE?2^Z;4XYo>Rmy9fI-oyfBmCiD|nivLdr zCdTNOV*VP${nF?Mg^vb68vVFbB>kO63}kTx)1Mr1ND)|!py8N%3Nm`8KW-#+X9o1H zA)LK3s<3<^`i(5B3qdj#xo^uY|83(krHKpej@+{TRA@>?bAJXk!Q61*|`ab9pJLS3W9$FvdF3;KYvQR;o<|6DY3qiWBt;l+ZsQ zjIq&jOC2Bgi!&o zms)nZXn~iiJKu`Ewk?PpB9gEQ><#27R)IW1kfxAxfT9=qSd4p)3ZC?{|DK2l?1~T2 zu2|~NHuXam))6v1j#u@Cdv(OEPRWtjj`&^u_)>ht2(~D0-nI zLoKBH1fBW)mQVt#!@bDJZ=|l=p`qt9M0<~Mjq<7l84S0fmy$!!;)P!$CMUW%4=t7F zY4(Lom-rld6;jA^PA94#p#dlAPS7>nb7pjh_1q{&z0BD&WiR{T*IxT*_kNbaO}KzD zg}KjMw8xR#m?UMr-p$~oh(E(HgkyVK41gEw2#2gEipRhRHA;!bP#mAV3ujLnc0`me zU3Uh^k!i0S@6m6UManz+?Y2YEO&~&Ug7I-1t6{n}i}r~3`${6(8I2%l(5MGnL9moY zOBnqDYBh>5pT%cHAtC`9ZHL0#l|-~ThXAOrrAp2e_TBwNT#5Ly7{)(MLH~Q} zoX!z;8r?mkxRZ`*d!OG?yIJ(ZQuk1A{|uhy9sd214$4*Jlkhi4lIlqj@m?j+eJ-JY zN-x#(HwU7}cJ(NpP-7rFU+rO{68X(emlo$G?P1g}G90rSMw$`JeE8UC%e)?mLL{g1 z!6RXv96N_dnb7ifV3~!Bt5}hl0Y7!67Y7Jgy>vn?S}WrU>$5$i-Q)DAG?psTe%~Lt zTCY-lJ5Rulp-xID{v-42zT98K4R_E@zr}Wr|33j;SjP!K-X9f+c#)v@Bp_c116fNA zzx{BNGy=qdabY4Q;B9U)4qyZDml6T~#)bfY_is!F{xzUHsx6Y@&Fa7s1!9Cv*m4O0 zWYh!Nb+874J;DePeZ_a)a|{CyRp{!7wg5@5Kr-m(VA@|`+z-}cfJ(O}-u%*H2jR!a zol1=^SM;e41vBqU;gBMLQ*j_XDI%xFU;7 zDi_>D^WThtL&oZ10|&yTGV!r-qEAzyf$beG^jQc0fIqwY&^YNdXC==Jq=LI zABtd!SUHaL?*|Fb|9=OKkm;c(j6vXHiN_x%^o2&$J0ZCY1Q=I(fUya$MDVzU1;?1E zN>c{ijnQ6?5gcTicF$Q$PFSG?7cl)cDTM!bh`!fZ7{;2U8_G|irH()oU~tsnufCv zbvu)MLCCFHE;@DJ>H=%DvE^`44{@lJbBL;gH8<9Gb-ay9zk%z1z@3EMtdnGElo7U; zMbUrXoop&glq?Kt5sC z9{escU&Lr;-HXLi=nvxLwY9Z6vUjP{OnntY8)p?^&`^C)XTmN-xR7`IO7R=`9Y=sRM(R+r&l29o7zyjR*pUVpBlSZTbdmAs zg#^%{Ytc%eLx6RMhrqBd%rKKMihzI;2FmRhv2K^UQo;_a#WMDFT1+k7J?LV+*C^6` zb(pG_mz9{uV8WC29s1l<`o--8@QjFT;p3kRfKvp)$>9B={^Ep>SkOeqY}5$iRHf-D zVfbyhrB73AFI(5!=1qV?1c`LzS$1G6QZ}?~=+p{q5#qlj>QSPJ^$f0h)2&~ps`Wzz zke0So^I_YNS-#0*r;RF02HguAf=*$Y7U&GF?~s_G;z*D~yy&w&T|Fk`Bii6%o$*RS zTFC%*XkYIpKhcX0A;FOlzm38D1ZhottpP0#uq|mLumEkngXwgPPk?_}R!9uFb-q9p z_@k`}%!8EByaA@*4QsS`U@V?h2!A(<2OX$LOCU4W2onSj;D#1qo`(aMbH_N5k~1y* z+%@!w@z}QrGT70t!&mfqPr-_K9tElakBU|~;QO@GmaI9aisE)3<+xOHwKYpG z!SZWw1u@8y89FhYa~ja|GI$E|c^`gvpX~pVK^4I%GvE_6y^cq}*=Pja@o2qMTQsE# zOAylhWqg;WPf-O%$uNXcKhFzaVeykJgsguLzx-fwa?LHdskd4JruGx(!J@fh zWD4DtbMV1=MH_@^8^1JDku@?Q|4F8pzbpJ&A`EaY$GIt@8w}r~D1#PkG|k~#d>S?L z*k?V-M(&IFF1_xsnJGdQL}!H+?k#qm5AmE}g!qc+U^!S!0qGok7~JvbxIXHE`ALy% z21*r$s6#No4x}uid>8BO>-G9t#jbd1rnS1iMBMDcTWQWI;2%voz(5E9psc#a;kbhGN8a{P*+%28kR)wru$WE!BU{#nkJ!>1| ztCu_27{XD~@sHn5;ZxCsJ7!TP=^s;RFR$wnWJ8I&4-YsD^+Y!ojYX7cPSBI&wSPe5 z@C`^_`=N~y;oB|YPh6_FTtOZBcVfc+^T0v(UIUI|tl|FT95HkV1EELLAp^l@iaVU2 z>3=f6*3-$se2Pf?kl$is%-Fq{Pa&nVp5{ELuU8UliU1$A>XB=&ua`YO(C?X(gHenj zg#i|ghkWu8oUb%2@(pX3R%bPG0>&xbbbxd}8|jML#|xDTUS4gim6Z-VF#C0|-mj`) zrS+bU`ENtu!mKU~f?nU40^Ze|EI0{D3(h4R_#IOxqR@r__>?qLew$T3u{c zn~hqdTSwd!hF0vQvTmyhHy0Zpxr)fQg{hP`f|Saw)%BKJw;|iYcR7Iaa`GarEeBeg zOznS#mCT^3sNv2(8<%o=ab2nIkBP1c+&Gjj%18@R4lY{+e>~hDZStfAKhr!|@c%bB zh=QM#YpkvCr9+Y-80iWk{BKMmb`G-uZsnGWQ#vj(zixY1<{y z%mi1G+t(j6wg!tGpV80Z)iyDlo~8y%DL9MH_rqV*HB3${QC;m72C(}XFuBHC4k>WC z$i1fYQ>BCY*TL>C!;)$0wsE(xJy~sHJ`!6d-Us2-G611s;adm-sbWQor{nANyO77D zy@OB^D*>QH2w{GoEM<{A(hD9;1`gR|lytQ(TEHMS0T7o-ek)=eZ1`R_cu`AjX|jms z+N?|OioRx|H~frU_t%%>i}sgDZ51Wl@_7Stt>Fq~gYScJCH zf^km*!u37(6IRqbb$~|qTz?wce5DqSREaDL;P5nnBNEVn0e}aKIkqC<_J#vz+AzS9 zdw zOz`+71FDeS*#Q9BM1}KHN8+hE{tFwDAw$TqD6lB97-O-8#W;(tEOxQDkHrHl9%S(l zixP{yEFNKTfW<);hglqD!Iiq5<19|Hc!I^#ES_QUEQ{w@oMQ1ji_QEQ zA7Jr879V2qB8!(;yv!m1w}rX$f1#Drpa)qXsE|gg(y_vJq%0jqu90D^DwIRUk*$Sd z;T~Lh2n$XQV;d>lTNo|u!dd=p#hH9zzueDU`+Ex83KO_zE6M`aB;p%t*z6x!+-Yp? 
z4HQAPlFj77QlrSkQBY%Q3-%*wTy4d^s3z1l>_^phwFCPxwNu@L{T8)L-HZLW+O76r zzg69*?#F&YJpiWopk6cayw;RnUgT2Hu3rA5bgk-eEu+#BQfYNyi7#O;nNg|CJhV_= ztq8~d2$~dhCj2tSiIDOlXX3|V0P$lj^)rY_$6azxX)b;Oo=GGqSX*wd@QZ@Q<7v|6 zT(`B(>LO;g;hH}WYZ|^ZEPFkB3|tSB(iUeiA&tipbtndfd0!6~W)>(aKZKSH;_e0M ztoWL#5L?f}zKN}SKYL>owA23mmJ1PZmJ7Yu12}4Z*tF#rN&vvbG@>Q6CCvV3RNo(pU%g!JfccvaNE5#)N zuzw{Mu7v3rwmEDrRn34P+kl^*VD)Wd+H7$<^E&wXhlgGr`Y7)cxoM{wAU6%?zH{|B zSlohGI}C_pfTA{>cc2YBs=GeD2z7%rsp?4OhYUG&b+`lvRcTs4J-IkJ#TlDz#FkH& zE-rCeW(k4joXB|{_TUopmk?|4SpahwT$XCM2>%D`jn;${lT#9DY!pAQaJ}jef0^@V;FSwu!SOH@qT&R`rP(mjYgXzS7`D5f>n_)W%nFmcYIDX@il z?J6i9fQ4JJB#9Q0QO|G_n_XwRXx&J!V6+6xK~vGiv!zgVL~#T93+9`)OH9jVAFF`A zE0yW+&F0;FFHnm33i~?z!~|IVh^?XLcCa*s`3P&Fsksv~?8mI<)In%#2WP!350I)7 z<#Ef|LOYW-%boa^ zAmUyRYkj5a5iF2gJvDi3veZ2_`NU*t_0;5v$Woyo2Mpa4HHaT3~tkx^lfs3 zeo*+=RH{6o8_GG<=ikC?Y1ePWe^B0?z*Iw>6`EJvxCW~n9DOvfkU_BU4O3jfa`dEbSOW5PJhyy_WL1G zbSz@h_xE`^xj~=khX>!A&B2z=*Q5n?#yVi=tkn_s^mhBR?zBMX%)0<25G}hg5JSH< zfbn;Lkqz+HATuL1VBQ4$LSCQ4H)=b$35SWIUk^+`H3JFZh=^;#RL`F7G}{e?KibLd zQBW@u)WY`xYNno?;G9pBx+V$eo2@t*3$bCZuM|-Z1WJ-Q@_7pKx`l_m7|`~15gEpA z)J60N%{BnSz2mr?`f3KDWPAGOI-w$*5*-Kz{+WJKa5I%fAVH}pstrv%NqoUr@_VRJ zhOUxox%<@1Yfv_fZUJ{F=EiF^k!ePo5Ku5NL*}VE1sLu#wv%bj&Y4|;f{5^%#;gPf z+8)eY!NOMF;8;sEjE%s#LkJB?sjdb?+v^0J3~fP5f|o={(FY?yLM0kVpXdXm z2)RI~);`yek5x2n_*jv1Fhp3qlas@RNYdujw@sUp_AlQhbJ8bvt|mNDuk7!SGi z^;4ZmsC-9YO-1$%8JY-$57W^}^>B?UBGM{UXRCm(%(f|&N;A%Esa={m>VQEaUtzxL zlyFQi6?iCD1vSW^J3Pc| z{aU2QgNN=Q1KoS%$i)xLzx?X?BS&b=D(7($djhb1bG`1C$8bX5fz#a+e8z3xY}RMz zE?t5zN`CrItPfVNRkc^idZ}2KNsc3cu9NUN@{qi(gJ7)9VLmo0jpa4Cy}>%x>$L`y z3&Tz50Y=Y6YT{233u5+$ZcVo?BNWJM^p=R~FBo*hlFdUMpxp`yIf2ZX~0R1k4 z{Cq!FepmI*Pi>1^+@2=se3(S^DvOV@U`AW#O%@+#QDN~37F8CDES6ZTu(-k^7_`YR z>b1@*xNv|C;SnZxrwik-WYN}zl_6o{;t~JGw!q$N|KwpKcT56;RVvKhZ@T+8EI?@tn6UO@8R5atM^K)eLu2pH{&6K zhW}8c;W7oeZ^H-GlicJOaIB*Z;sJh;5UNMe`Xa&)A`tkOsPOK=pczd3 z^PvL3OUH%s;DWwhZ#XKn9>kAC;PSyf-9Zz?T~!7iu7|#s(GZg!u1o{rzS66diluH zPe1X@kz=LT7A_oldUCc~49<0Pa!B+8Cnr(}a`usUo~JtM!<^ooKVQS{PDMARM{59CP?w=1S@_{EgLQ z5v(QtoFM*8G)D<>vFtfJFUuO#P=ozr-4go6k%&UjS4BHv)AM>DpORHoMD;C%F zqexHaYchJisPT=reGSe3Offqi6ydq?yMK=D&~9Hwa$=QD$v)E;ha1;8-@6dGXrWDh z=PHXe7A+R*Ea+=+To%_@Bq^I$ap8OUrP~3GLI#uRvY~$khu0(6CB^X1T|p66F#|d& z!xf|YEDm&cPElFDc^sduk!kwIq>zYe+{CpbKEwWS4tR&sT4PBxm-XdwFi*W3QYqy|D1oi=|gy zJNx3rxzgm3qeovqF?aOnxrKB3!pZ5!OAAhQ&22E{Wwm+q==u4{qMsQ5`t|G6*H28h zoaLhnuO5B%{M-?SPo8}IC?*})N2XOA5PZ*>~iPEF3~ z@c1JO7$Q&lq1dMod3d8O!FtcZf}t6Hr@cfbUz}+-+88vS^%G;3Xl#GB^g05$F(iWD zYNfGST4*`#7EStIS`8Hw+)LT2=S}y#N5xD>mD$ zXPb4j-BHeIx4yhogVF9m2unU- z*`8qG6Hn4Np1IMNNwaSOi9qr zXk4fK3F2*a$R(K9$3U&lb#~)2etFq-_yLUx563156}!v4&+=2b`4_3PT>e@x{b0W9Q)!COhz6?yZyn=ch+- zs^H~y_w;hAv%2m&oU7&KnysqRi>&Fd=gTDy--O|KdEQ-DTr2`))@MloU2_E;A34R% zfsP<;-~1iRixD&`25*@?7lwgw^+9!5CgRIZfsbY)8a~FhB9Zk53fT}8uqu1b%aS2^ z1)ntu1cFIQMw{~TxU13I&|UJ8{<9`Nwte$ovvt&nhLH13TVQ)y*7J~U4j;7Y)U!Bf zLuk|D8jf(q>=sGhpd@{RH7H?I6YUwd}m-GUkr@pxcu(OrLzv5t{-yE-HK@|WJ> zB^cz0>+&`mPzKJ9MOW6Ay+Q@#!iUdEBJRhU(Y@fh&GYY+TA1- z`0Mi_tGB89-w<4S0wC8okdXevR787#)~5msI(|Gip@cSmfbs!6L|DVh#t6s?bWoIr zB1AWp0AchK-vc7ECUES>)9{!n{QW+~V`PX$f6^Wk9y4>4tYPPmZH5_(3V=H2@eyC$ zivo$1#R|ee({d5nkDXc_+`3*K(Z3M~?A#&bs0x*L*04a-B9i8yUcDv48iw?}$6pBvraeICN2X6E9r+zzgRuEnqFWmls~1d-+A@&-t*{jLocO^9!g) z2Tr6yCUwADC~oC44|n-1-VDhy&b9yT`EgOc>JH!-Z~R zOVV+Eg2ceK+f~GSC=S-zP(8L284QwOTmk0^anm!m;8>Ns_&sR3oBAV2vy8fzj)}AZ z=;F8y0aBr$SUB|QA*AO8z=zu)3=mZ5c<-~c6s5%>mudieO|9%B|t;4Qp17E8cO;BLPifL zNoeTvkMxI1=sDYTGD2=5uD-jc=}82{WKDNcD9scz042-=Fl*&j_9}9oNl2IO=f2=d z?DaaH&n~Ngq!-{5h45Klq-`G5$aP3Rp;!|96on16PHP6jA*en$;_4P&L5?69clGqn 
zZwB4=FbIcLAbhuREfGaJyiQQWpaA@6z-M5lfx3r@s$HnS5;5j=j%CKPhU;M!WH6|HZ`~qb$ zznV^`v7HQvhO220T}{~Xa!(DH?H@Ja)7ac^K@r#j{;ZdBL|>iswy|$q>MUZ$XB!#M zk@?*A1F`^a(hoRoCdvZB(odLx)_kGXSrs5V-`R)fzz7aB+s|>kRINE*{z3^BsLtL4 z%Q%OP{1h>p2-jGxHaP=JYduKH=Np?HRH7AJSq{x8xDxC}?adqJoSVKQ9$2wCtghH< zDWc5l&XNLkfTu_$Xi{m$y`AU*i*{BFs_Q)hE<8ImH{apXqD&jR3E;^Rg&@<dlOe=tykw^TNkpQ@pk$K6iedtH^j^b)V9fKGJkI#W zJ1$vn-wEQuyx<`ykNA9QZh-yJ9qN*>1tGAhgHUnG+igUU%6Oggc(GE3wz&zN%drj@6iUxY*0VHnRd7WO$%e-B zaxasnKxrlZ<=$bLHJrZ174I`lyP|2s_L$;)=Mj95wd0y~Z>wA=_xb9n#RIidU;e@* zqc5wo9fnx;zq7i!{#@wpFx%-K-RLo&gPT{0(2RcB3kwUcU=iOl({nGs^!%AqXAjJs z`toNcXV3G?ET9JytMfU{3I8&7F=BPT59dB@>GY)yF$^um=^$1BQTo2$&BUN2(5~Sm z^ihWFq-Gwp)s(~qTOP09Ku})W*MZ8T=9to_JZ280)5V(L=~&@_g!_NuO*S;7+NZN= zN(b6Q+q^=x9%vu zBy=~%xco?o9dT$O@U4Paq>yWe(%#x@#MV(a0jS-pjdes*iW^84M!=;CF@|Z!dfjgK zn+bDl-~?F=p!5XefiT`grSBcs7X;W0JPpIT=DM2&vc#+w-#v_-AL#9kp0;51z?bjp z1K#h9fp>1h9&ErnpiT?jh_|E!6jBD7rEK9>k~{&leh1zn%#WEDJ4*@vy%wkuCj8k_ zNdmu0B_U6J22o(RN?&IcGxBCj^?JRyw6rt=|D=CT+he*fOMOMwhfFug^%W5w)x}l& zY*Kx2%+UhmOG_~>&`{tNP$?1}F5Mgy5Tk>r{5vbMe7DPq<36@vRK%BK@)DvzK1Zx* zb5Bvr&e=YA*<^(lB@ymITWZwfLvaaF=P#+2#!^#ZE+cCSvVUR($An*iDYS@C>aPer z+n0h;2k1f@iI=}ZV4#Mf?u#1%K}Yzwp~1&eXRSsHK!SA9V@qcmU*~hP&$Ca3xtWZ_ z^8)0Xak+7*&2f%JK>v4t-u5vV?l8qfvIZuwuuW8DLCUUW?HH3dh<2@Z8VjMqoq?_u zsQEIBOWWBa;qbJOd0C1KhX5Ev#j!T)kUtkrSM8!e)TA*ce1UD|2TJtphAr-2QKZFI z8KhiHZZV`}c)SOO;%9&8<|Gax+;lIucoSH3ew29MsO{S9XOTp=nHF;u=7{*~kp@es z-yjsnp0=$d$Shuy?Ngi?ZH&~k$1*^gZH6ifwLK?AwcO?M(a*;9$J(PA+idI?Ly~a+t!Ja71AV!&KKe2 zMgB!x&4UKBYV=keMP%*2NC!U(u(!BkoFLseJ_Rceep{{#ARnn`cKk_GEY5zod`- zFglWpmt`bRp#EvsrdG|792wWBaAlcZro*a*A|1MyqI*Te_J^?1|(88LZ4` zEGQ%wK+d7LS9_C1 zFZnDFKY}9Ct-sOU6f=50*B|1N0BEfZM;#0nNSL@JiX_G{h*{*EC-p0#UY;*dMejEX zOhm$l_0Lc3h`T`kBH8s{V)5Ux_#PJD%i{Z4yu;!LS^N--A7=3*EWW_vi!A;ci=SZe zQ!D~#$S-k3U&g5)~TG?WyET-3^&X&fYTQl?g8 zcTi?;$sXR2Xh4MFNF7#;7>Tg74d$BaDF-Hw|q3W>X^Faj6?XAYZtU<1Htacdm8F_3sUOw4RFL?l5yIk4}U zr%oIbPuaVK62@yF5TSF1(OXpn-q_9>le{diF;rCMA5?4byoQO2ypgn;*84sk0fQB*PxpYKopSbL=zu6u4AafvZEb>QkPgq0q0JnQdwF_$H9G-38ivxS*1?(Fny@PRm(=WE+rBT4a(Z6X7s#nEL* z{4Gue8oDvvHo1amQad;F<^z82VU@a?b`HI@i}8X>>5stB`PMF`s>4U6=e@j688Mah zM)e#UN!0^UF0U6%dbviszDi*(1{j7_d#S5l>RPC;$dKeKK=mU*Pi4;V0s~QAcuy^# z4v@>tFm!7=2Ur|r!FPtf!}eKkTMr2L^V0~%LdyBac&z*A%zCMYiS?2s3|=VwKyEM% zkW-U-kBLq)8^8qm?7<`UPXAebM_%^`T4eP?4KmREhx|SUx}(Pk`1oc$nvkB66yl?W zevCHu^6MqWCo@st+<-9zqjx7a1}$^A#DMnm{DyzRB8l6585hR+Hj)}fmfIYjNv8Lc zxTTlgX`{zPyPV2npCePkKCebp5&K~^3ep?%9^|KE(T>J4=SE(VbhNOHNa6&{yuvHx zkF;+4@keohzKEWTeoI~m*~I=!fep@PdMZ~5$57CQf8#n|RD3aU9ll8uhtC9s zDCVO8mm#FdNKOOo(*c5+)fww;Fh$Oxfl$(1M5aPmfDu?^e!?Wk*Yz~lU<#zHbs&p1 z+(r@>Nq|a zVTSWeTDi`EccsVmOjhVSlAKUh#Iv2Ym<8aV>Nso27mF+xl7v+f*{C2zp}^eIAc=$R zW0;Q!D(8_jPP7ZQ$NzlPZ)ew$Um+HK`RZQcCLEdR;3N>eP& z=?j|d$SasR^`DZ4gqmV>M}^bp(7$SLvLxp>Qi#sP2b1U$zVs~HOu0UkCzH{%h$KNK ze4uZS4bfskt(H8AHg0p+T&fjx@Ub_eSNi=~Mj+_Jrayxev&hR7NZ*{Et*yH&d#bw? zMKA*vy>+$-wnvRoLkl&T5Dfv8)=dz?+Nz0LNgx{VLV?l)Z4Nul9T>zb%;a9ckvWbHENBm}~?$Vbj4 zG)6kAtQ=F6nTU~9WAr>%Xv_EEPA4fS=|NmrA(&Yw){1#$Q$W_Lu5V_vDZK*Df+^2# z8v$NedDlp)?kr4DShyG|V`gO?DrwpJ?8UWGmC0kN+Aty^eoG0|L#`faJOztrour^! 
zLvK##upcI~7p)R`8NhZCJ(za>2zNN!Pz->=*G6VI|M|wcGy9egi|mkInR=ssWP-~F z!NRXQva%NSCx^2@9Wo4u+KZRL)N&6=xHSx%206uc7QfD0f|i)^3|U2a*uW>=nf$?f zA5?vU?c9Y5hB8iI1u_58R=J-^kYo}VzO=eMZ`)PtyZyLw1HjQtK|RM?CC zPBp3aVSkU>uO7jEmpY&hVt=nXq^7XntsYg6VZTQmR!6YEPfe?%*x#=nSI4k_Kpj^n zuzygUR8L_4ka|)*h5f_oY4r^DCH1U&4*R`oMxDZbQq8L8vEQdot25Z|S7+56_K&D@ z>OA%b)CKhc><_9J)J5zMsSm0TVLzo_R4-xwsG3(VWB-_XMSU3i!|GLa3Hu{zLA{3k zwEBp89s8r|qv~VWKd#E^4eXDp%j!++kE@TX3ic<|CsY;tlWI}buzx}+Rmc8GwWOA@ ze@d;W2KG;@E9xrt�za#r|2frdrrPr`FY5*v}~B3Bdl8YNIcAya{q9W?{I5iMKUm zDyaS!uz_MAYBx+aacoe63HsGfB0nO{#>rVFbT$X(>z7|W=^Zd;lk|#MHLc$ElRPXo zt1Hutb&JQoG0Dc(t1tmGZ+@)N&d#eFVvgO7S&d#j(ih6 zPa;EtzbYWIO)z%^*W5TYxl~=N-Q=tNs|_qSi=+f2}Gs50ix#*#mUc7bXc&Ogvx?U_^uyQMo+tjSfIwZpv3ZLGG|t~Z*HbBLH)tGEB(&aN&vt|KekGty}EKawrS zv7LBUvcYmBB^sHKbz<#q9A^Vbv>^#wDSL%$q$|n8%F^Us2}_q$sKPt~54^B16uj|5 z70UxJ@UT@&!3$OV6l_s7EKmekcG(mgcG&>h!18_Pboc!qk7Jk4y|?e}o<66~={|k> z^lAHZ_;UHKI5XoExHdv^7?KCl)HayKjMgPsY$BPT;YKTz{Q}g|FBy9>SK_lux6_wbb*v7IlChK9oF%sRA^^ zlZW`AUEgl*piUc*2oXa(&EsMD1|kLH)I(*j=&SjywZw;fw%^u&`sK{$`w;h7x$}6A zK1UQ*l^w0aCL*r5Z}G7FBCPD|rV2`tNKBzzV`)zM4Ek3NIi(20U!<;2;sIXJoWwvO zf=N!*A^5ZnvzJ@Y_%P7XsTp_r*l)_z#Lmp3Az4OaZlL@fs}>zQuGa2&M{EM4ur&VT zjGyz7m+=?84xq+P>e28uI=+bw1iyNm+A}A+0UU)9f10vD8n>*iZVPxW!O9Rs8NT3+6-QawqX=O9FV;o(>?@MYQtZ) zs6rcVq{o~E6M0{TpqP;~(s#oPc6_Q|4Cd@FeNmnvTRkPcT}aUxQl4go zt*Pi=Nimk=|6LAuOeFBU+uS+)&6U++5 z&9NQS_dp`6A-csDdQh>}eA~9QZQOnYZCSsQZeDc=ju3}IgqB8z%Rn$gc#G}q-jki|Hmv9IKvw0rx`i$a3x#D_7{mfBurF+J`wT8&l1xGpS$Yjq1p7Yeazir;UE(%up$jFeqpJ(s z2uQd0brM2*wMnciz?v&Zi)D!+U%SR92^YC8KbXj<|LhVl3)M@*%0+(r ztD~~%v~<3p-mzZr%~-|7gCxrdx%nIlY#TSuKW$JD<)Acc|!C~~DeKTbOF-!NNKVoYI4F-B5w4)9A-_eFrz~;pt*C|C}BDx3-(D9#^ z9&x#zA%&tp61O>6(Q;BHtC9txWjx5VniLtD2R?|kepbH|%EZ`(ITE24&G4UAVe(xrK7LdddlfIGM&Qy_pnm03useW!#PLLn}x93(zgv zN0qi-WujZxhe%Hs?*|Eh-|6G$7~2QXdH7X=K>#x)^CR(f+B?3LwOJYO^OT)2;S907 zP|x{@WK%yeF3O4aeme|qS~g9qLrXwM%jL21mkSP2P#K&wTt@&D$CV=6n=4h6Jn(L>o<_VG9d;n+V%nD@$HM2sFqB4R)BrDo{;8VJ4 zKuF=!!iXTpP0w3}@yWGYz3qP*Kds{U0`GV>hd;y?+YaOd;cl5j3nj|_2~y=G!4!ZUtfXIZ0&c}RZ!)4mb(*)= zu)X&V&m@!RrNJW+^MLI&-bmcCBAr1dSArBMA(~P`pobCyJ(v>c;gzF-1?gQDkE(5Z zXObE&sbRlba!E^O&So+-lyuTD)U})jA|}fU0Tvz~A*oIiJW6ne;4Hy8g2xCR2k4=f zP`ymbHQi!>s2DKPoh9v5FDr3KEYQnxawh2a&>iBKp_*Qnj?D9tA;QsA;(fz>=AS3J zW&aaIeVSmVH~dv+PhcUMCc}WIC{Uoc=jL@ZAcH#^Oyhy09JL!e^^j-kIFS_wI{Lsu zu!tBOO&CnEY9g=^j+n$c*FRuVmE{rJ#`#kcwNA*}8WH+!p0Z>k8AW<~nGuLtDp?I&7Xs%yraU$K>s}K)Js^W(I~D4ykJ2 zl78XZG?g+;v!=iq;ZR$qURY3E4abrngoDwmQw*+uOIG?~tFNn_()MCFCGTPZUbqmu zYLYbwzgwSO4NuG0n!K|+AA2*IEkB&Nl__`PR;J98CMj9oR68%ZyI!u*qnrcsZJcZX z_v|?`0JS@AK~V$0g$&uv|A-cCqMc{w)#>6Ayup!Dc;gGb^M>-sfZ-Kbo`hG1%yrmY zNA!As$ne1EN;qkF;N5G}I!1sE*l%TCU>crDTSD4|vPdj{I*B<`Uom5?$<{?aT_~Pj z?Ub=Z9FE##E*!6h2_FQ^7!pxgd-0O-2eknM&s|KGeH779&()aKI~zTco&H@35V3(Tv9mHjP4)d8@tYp zENZPlKH8JJGltH%BE|?1CSWh%0mZM7J80yz+(9X(L^Gf<{|k>4%*65vOfU-F&V*53 zDr6Mb>%*Q*uSYTkSYu?Xibfx$x{s^&Hi?s7$$Q~Gw#Q|r&qfZT{rtgG(s*R`6@{+ zn|)kYdWwQ5Ztq>YPE$aF#I$@1iT|RI#68I(WKiU8%o`Vy*Nogvm7ghQ6S=@1HVF zebGqyR($uW_hU`LTma#Eg5j-(vvD&;E3cuWepzW|02e!{DyNE;DGx|3>= zV@)u;iwEW1aMJZGQJQ62q*%7qTV-3vvHp*ccb_7lZkOO%lb2kJ+lXt!VG$mrEkGhh zv=cK=d?&s)4=Wpts)G4yutnM69$j=tC971} z?g-D41`krmO9iX2?4J1dVA-)hkdPhDtVDL`lW&z$T04bbxqrsqJ7ttyec$d0hl}-S z`xf16-Vq<4)o_3h@bC$T%ndwvqWk>-qj~)EC^sB(Z0=1zRw^8T9?;F`&W2bwbAWC2 zw;8h!A!~Mii3C;aAkOc=e-LXS-}F>eM1o{2N=Qa4YaYo&jOCPX5p36^Z}DL2*g!Ac zLrvg>H_2T0t48Qgtc3lPSnbwjNHx+X+{rgM&|AI;D?(38x-Nv*mH}<6MgdgqUv zlK=wx>A&31{T2=#n_fQL2vT1sT)x7y>=e>JrFo(DA~(!SL?&hc=Lr``72Efp@D+_t zH!JIRc7u(b^h%b7A$fZrHck&Ge>c}*VBK+(YdLiAOQb#iKsAX`Y-6owG~KSHIP#ilDc+^#ES3diVRngXi8H#%IxTdKXv<(quVEFZb*&t=Vwo9sf$0%U3 
zAa9l@qC-K@C7%KBsoB2XPSAI8Hd>(x2i@L~;tWf(HwTO(aPdc1X2X;C#nf*-#kP!E z(5;K;sTk%!%e>VcH70{CXh9`g{5Bhr>*Y9+iK{zh2XR+qRF37(Z$vxd_1=iGu8D5s zTtwdG6w(C*{+N_d8INId+r!Z-7~4E-m4!0h+}ZmB(njs0q>4FBAtT{k%W(9egu4#5 z*sS)b;jJ|Cjaiw{K~ASpP=UBAPVl2hS<7NI{0ecRz`@j}r$c1$>6+i+LtLEIlR6c9 zEVeiG8pKWGX0noa+tC*f?y$Pb>s+%cE9xc3X(`Z|K?&-eBlAt;f53rAJMjy+vr8r7 z*JRvDvk`~Y#%SCS5Oo^1UT!@7s%y+LyzI(FcE(IThj5|+lZ)79Z5NRTw{$lz-3j4> zFuEW+6zZ{MzHn?|hmk3^O=9;3JuAx>yV&;{Ak=khvxOd5+qUKXDG_1|h)<~;X_xJK z^Rp&!a*g;riee+`!hD32bsCj-&V>|D$LXRhZcEr#=o%IgW2-am#rhNVI}N`NtQL;g zN2Ik?B|5E1GrHW~zzbvZ4+r#38-q1DU0t4BS69IJx=XC-{VaxRM6L^3YtZEo9!l)| zyZ)b(RQMB8@lc-d$pR&{sDylqy&j`j%GeJJqN5DnQotr4O5X7?ZhoCo^3ELzj+i5nu~!=v-^<@RZoE69dqe zy4{}99TdOqtvRCx&sxDKS7+g;zmT>Y?9#&#Ge=veXTI*zHOSytV! z3(BDMPca+{JgibV9=?|4gSkWhc`$^39iV$9b+AqSq{Mom%caa3z7eZ(!C>1p|HC6N z05aoW&q!R50VcN+-1(+xd@F{=kL7dE14>9B(t{R6ZaI*K=-vkBbF1&|x3^pC*rDyc zAFKIOz12L*oTzk9Og=#-r-X~y;M%6YDmWOs%_fhnn?_%QOYvkYO-Un{--%%&vG}rM zeL>ELN24leEZ0M+eWwrG*4DaLT^%^n6`*e<4`|LP*58dG^QqpNrLe37dMH;d7}l}c zcE-k^US@zYj=+&eoUoSbM0U>LDq=4PH~pVL576o7mc2E-taqoulli=uo>T}$=#cpR z7!l;a%Llh;hLEnXk52`EC5&1Gy(oQhIhfH zkRD1&Anb(m652njI3!XMmXrQDwtk52X72EzGzvA0SV#0!eSbg(>QP&|YOU93MPa38D0CZZ5$Gu+T@#nH6ps#2jiGIQSRUZ;|Z~g z<(3_WoIxZLY>H*b{ov|e|1TbY`>NSjiEejEEWgH6W}RE76A9dOok-vpN%L=YVSRB9 z6UK(PUv^6CINN!h8#nHu(%7y?1tVtq6(7SZOq-;w!9zCpzrjZ_d`8w! zomCo(fYS@CK_RG;;^K!XCSV-&(GNe&f|+GuDJ!$f=^!JF0^)PKvd)12nU8dJ_rFQ- zKEb~e{0G5@1ph_wKLkev|4Zhil9z#f#4FsEWr~5 z^8`;4ED|&bmI$s8JWcT91kVtBlHe-Aa|G82mI*#Z@M(h25WGNegWyGi&l0>u@RI~9 z1fL^#jo_yUUMF~i;PV7gYq=)#{}%Wj(q)~#Bh^K$@|CMYfQ;EbDAC&)W3(*)`vq2T z=xv|;M`#v&!3gi=>ST2cG2e%(kD23e2dXFVri?OJ(g5!GRr0?{kvhyi>pz1UAbh{K z=A*Lacw#v-{u&?OL`83w*Mcj;tv82pw_uL7_bG%vrBRcO0gGn?9`k`49Q_Hveeg9V z7$zViY9Jmi{%bTIy2VM*AT5Xr3g*8)t&jc*06mmCXfd!FdoBMs+2upN8F}TcS6{(_ z_BY*9hJtc%R?{f{zGZ z<1c=Z;2y!R5&S;E9}zHuj?go&u{+i0?(TZ~+Z)TILhyM29Au1TF#QjcZQnCvwZ4%O z3=^f=RPB82*;=*sx!Nxrd%8APo2)J2=@Q_n+C2WBu3g3({{Q*fe(i}`h^Mo)EBNC2 F{{!8026X@c literal 0 HcmV?d00001 diff --git a/venv/Lib/site-packages/bs4/tests/test_builder_registry.py b/venv/Lib/site-packages/bs4/tests/test_builder_registry.py new file mode 100644 index 0000000..90cad82 --- /dev/null +++ b/venv/Lib/site-packages/bs4/tests/test_builder_registry.py @@ -0,0 +1,147 @@ +"""Tests of the builder registry.""" + +import unittest +import warnings + +from bs4 import BeautifulSoup +from bs4.builder import ( + builder_registry as registry, + HTMLParserTreeBuilder, + TreeBuilderRegistry, +) + +try: + from bs4.builder import HTML5TreeBuilder + HTML5LIB_PRESENT = True +except ImportError: + HTML5LIB_PRESENT = False + +try: + from bs4.builder import ( + LXMLTreeBuilderForXML, + LXMLTreeBuilder, + ) + LXML_PRESENT = True +except ImportError: + LXML_PRESENT = False + + +class BuiltInRegistryTest(unittest.TestCase): + """Test the built-in registry with the default builders registered.""" + + def test_combination(self): + if LXML_PRESENT: + self.assertEqual(registry.lookup('fast', 'html'), + LXMLTreeBuilder) + + if LXML_PRESENT: + self.assertEqual(registry.lookup('permissive', 'xml'), + LXMLTreeBuilderForXML) + self.assertEqual(registry.lookup('strict', 'html'), + HTMLParserTreeBuilder) + if HTML5LIB_PRESENT: + self.assertEqual(registry.lookup('html5lib', 'html'), + HTML5TreeBuilder) + + def test_lookup_by_markup_type(self): + if LXML_PRESENT: + self.assertEqual(registry.lookup('html'), LXMLTreeBuilder) + self.assertEqual(registry.lookup('xml'), LXMLTreeBuilderForXML) + else: + self.assertEqual(registry.lookup('xml'), None) + if HTML5LIB_PRESENT: + 
self.assertEqual(registry.lookup('html'), HTML5TreeBuilder) + else: + self.assertEqual(registry.lookup('html'), HTMLParserTreeBuilder) + + def test_named_library(self): + if LXML_PRESENT: + self.assertEqual(registry.lookup('lxml', 'xml'), + LXMLTreeBuilderForXML) + self.assertEqual(registry.lookup('lxml', 'html'), + LXMLTreeBuilder) + if HTML5LIB_PRESENT: + self.assertEqual(registry.lookup('html5lib'), + HTML5TreeBuilder) + + self.assertEqual(registry.lookup('html.parser'), + HTMLParserTreeBuilder) + + def test_beautifulsoup_constructor_does_lookup(self): + + with warnings.catch_warnings(record=True) as w: + # This will create a warning about not explicitly + # specifying a parser, but we'll ignore it. + + # You can pass in a string. + BeautifulSoup("", features="html") + # Or a list of strings. + BeautifulSoup("", features=["html", "fast"]) + + # You'll get an exception if BS can't find an appropriate + # builder. + self.assertRaises(ValueError, BeautifulSoup, + "", features="no-such-feature") + +class RegistryTest(unittest.TestCase): + """Test the TreeBuilderRegistry class in general.""" + + def setUp(self): + self.registry = TreeBuilderRegistry() + + def builder_for_features(self, *feature_list): + cls = type('Builder_' + '_'.join(feature_list), + (object,), {'features' : feature_list}) + + self.registry.register(cls) + return cls + + def test_register_with_no_features(self): + builder = self.builder_for_features() + + # Since the builder advertises no features, you can't find it + # by looking up features. + self.assertEqual(self.registry.lookup('foo'), None) + + # But you can find it by doing a lookup with no features, if + # this happens to be the only registered builder. + self.assertEqual(self.registry.lookup(), builder) + + def test_register_with_features_makes_lookup_succeed(self): + builder = self.builder_for_features('foo', 'bar') + self.assertEqual(self.registry.lookup('foo'), builder) + self.assertEqual(self.registry.lookup('bar'), builder) + + def test_lookup_fails_when_no_builder_implements_feature(self): + builder = self.builder_for_features('foo', 'bar') + self.assertEqual(self.registry.lookup('baz'), None) + + def test_lookup_gets_most_recent_registration_when_no_feature_specified(self): + builder1 = self.builder_for_features('foo') + builder2 = self.builder_for_features('bar') + self.assertEqual(self.registry.lookup(), builder2) + + def test_lookup_fails_when_no_tree_builders_registered(self): + self.assertEqual(self.registry.lookup(), None) + + def test_lookup_gets_most_recent_builder_supporting_all_features(self): + has_one = self.builder_for_features('foo') + has_the_other = self.builder_for_features('bar') + has_both_early = self.builder_for_features('foo', 'bar', 'baz') + has_both_late = self.builder_for_features('foo', 'bar', 'quux') + lacks_one = self.builder_for_features('bar') + has_the_other = self.builder_for_features('foo') + + # There are two builders featuring 'foo' and 'bar', but + # the one that also features 'quux' was registered later. + self.assertEqual(self.registry.lookup('foo', 'bar'), + has_both_late) + + # There is only one builder featuring 'foo', 'bar', and 'baz'. 
+ self.assertEqual(self.registry.lookup('foo', 'bar', 'baz'), + has_both_early) + + def test_lookup_fails_when_cannot_reconcile_requested_features(self): + builder1 = self.builder_for_features('foo', 'bar') + builder2 = self.builder_for_features('foo', 'baz') + self.assertEqual(self.registry.lookup('bar', 'baz'), None) diff --git a/venv/Lib/site-packages/bs4/tests/test_docs.py b/venv/Lib/site-packages/bs4/tests/test_docs.py new file mode 100644 index 0000000..5b9f677 --- /dev/null +++ b/venv/Lib/site-packages/bs4/tests/test_docs.py @@ -0,0 +1,36 @@ +"Test harness for doctests." + +# pylint: disable-msg=E0611,W0142 + +__metaclass__ = type +__all__ = [ + 'additional_tests', + ] + +import atexit +import doctest +import os +#from pkg_resources import ( +# resource_filename, resource_exists, resource_listdir, cleanup_resources) +import unittest + +DOCTEST_FLAGS = ( + doctest.ELLIPSIS | + doctest.NORMALIZE_WHITESPACE | + doctest.REPORT_NDIFF) + + +# def additional_tests(): +# "Run the doc tests (README.txt and docs/*, if any exist)" +# doctest_files = [ +# os.path.abspath(resource_filename('bs4', 'README.txt'))] +# if resource_exists('bs4', 'docs'): +# for name in resource_listdir('bs4', 'docs'): +# if name.endswith('.txt'): +# doctest_files.append( +# os.path.abspath( +# resource_filename('bs4', 'docs/%s' % name))) +# kwargs = dict(module_relative=False, optionflags=DOCTEST_FLAGS) +# atexit.register(cleanup_resources) +# return unittest.TestSuite(( +# doctest.DocFileSuite(*doctest_files, **kwargs))) diff --git a/venv/Lib/site-packages/bs4/tests/test_html5lib.py b/venv/Lib/site-packages/bs4/tests/test_html5lib.py new file mode 100644 index 0000000..b77659b --- /dev/null +++ b/venv/Lib/site-packages/bs4/tests/test_html5lib.py @@ -0,0 +1,190 @@ +"""Tests to ensure that the html5lib tree builder generates good trees.""" + +import warnings + +try: + from bs4.builder import HTML5TreeBuilder + HTML5LIB_PRESENT = True +except ImportError as e: + HTML5LIB_PRESENT = False +from bs4.element import SoupStrainer +from bs4.testing import ( + HTML5TreeBuilderSmokeTest, + SoupTest, + skipIf, +) + +@skipIf( + not HTML5LIB_PRESENT, + "html5lib seems not to be present, not testing its tree builder.") +class HTML5LibBuilderSmokeTest(SoupTest, HTML5TreeBuilderSmokeTest): + """See ``HTML5TreeBuilderSmokeTest``.""" + + @property + def default_builder(self): + return HTML5TreeBuilder + + def test_soupstrainer(self): + # The html5lib tree builder does not support SoupStrainers. + strainer = SoupStrainer("b") + markup = "

<p>A <b>bold</b> statement.</p>"
+        with warnings.catch_warnings(record=True) as w:
+            soup = self.soup(markup, parse_only=strainer)
+        self.assertEqual(
+            soup.decode(), self.document_for(markup))
+
+        self.assertTrue(
+            "the html5lib tree builder doesn't support parse_only" in
+            str(w[0].message))
+
+    def test_correctly_nested_tables(self):
+        """html5lib inserts <tbody> tags where other parsers don't."""
+        markup = ('<table id="1">'
+                  '<tr>'
+                  "<td>Here's another table:"
+                  '<table id="2">'
+                  '<tr><td>foo</td></tr>'
+                  '</table></td>')
+
+        self.assertSoupEquals(
+            markup,
+            '<table id="1"><tbody><tr><td>Here\'s another table:'
+            '<table id="2"><tbody><tr><td>foo</td></tr></tbody></table>'
+            '</td></tr></tbody></table>')
+
+        self.assertSoupEquals(
+            "<table><thead><tr><td>Foo</td></tr></thead>"
+            "<tbody><tr><td>Bar</td></tr></tbody>"
+            "<tfoot><tr><td>Baz</td></tr></tfoot></table>")
+
+    def test_xml_declaration_followed_by_doctype(self):
+        markup = '''<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE html>
+<html>
+  <head>
+  </head>
+  <body>
+   <p>foo</p>
+  </body>
+</html>'''
+        soup = self.soup(markup)
+        # Verify that we can reach the <p> tag; this means the tree is connected.
+        self.assertEqual(b"<p>foo</p>", soup.p.encode())
+
+    def test_reparented_markup(self):
+        markup = '<p><em>foo</p>\n<p>bar<a></a></em></p>'
+        soup = self.soup(markup)
+        self.assertEqual("<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p></body>", soup.body.decode())
+        self.assertEqual(2, len(soup.find_all('p')))
+
+
+    def test_reparented_markup_ends_with_whitespace(self):
+        markup = '<p><em>foo</p>\n<p>bar<a></a></em></p>\n'
+        soup = self.soup(markup)
+        self.assertEqual("<body><p><em>foo</em></p><em>\n</em><p><em>bar<a></a></em></p>\n</body>", soup.body.decode())
+        self.assertEqual(2, len(soup.find_all('p')))
+
+    def test_reparented_markup_containing_identical_whitespace_nodes(self):
+        """Verify that we keep the two whitespace nodes in this
+        document distinct when reparenting the adjacent <tbody> tags.
+        """
+        markup = '<table> <tbody><tbody><ims></tbody> </table>'
+        soup = self.soup(markup)
+        space1, space2 = soup.find_all(string=' ')
+        tbody1, tbody2 = soup.find_all('tbody')
+        assert space1.next_element is tbody1
+        assert tbody2.next_element is space2
+
+    def test_reparented_markup_containing_children(self):
+        markup = ''
+        soup = self.soup(markup)
+        noscript = soup.noscript
+        self.assertEqual("target", noscript.next_element)
+        target = soup.find(string='target')
+
+        # The 'aftermath' string was duplicated; we want the second one.
+        final_aftermath = soup.find_all(string='aftermath')[-1]
+
+        # The