[Novo spider base]: BR Transparência #1155

Open · wants to merge 7 commits into base: main

Changes from all commits
12 changes: 12 additions & 0 deletions data_collection/gazette/spiders/ba/ba_candeias.py
@@ -0,0 +1,12 @@
from datetime import date

from gazette.spiders.base.brtransparencia import BaseBrTransparenciaSpider


class BaCandeiasSpider(BaseBrTransparenciaSpider):
    name = "ba_candeias"
    TERRITORY_ID = "2906501"
    allowed_domains = ["www.camaraibicoara.ba.gov.br", "api.brtransparencia.com.br"]
    start_urls = ["https://www.camaraibicoara.ba.gov.br/diario.html"]
Comment on lines +9 to +10
Contributor

@talesmota the file is named ba_candeias, but opening this URL takes us to the website of the Câmara Municipal of Ibicoara, Bahia; we could also use only the BR Transparência API domain:

Suggested change
    allowed_domains = ["www.camaraibicoara.ba.gov.br", "api.brtransparencia.com.br"]
    start_urls = ["https://www.camaraibicoara.ba.gov.br/diario.html"]
    allowed_domains = ["brtransparencia.com.br"]
    start_urls = ["http://cmcandeiasba.brtransparencia.com.br/diario.html"]

    start_date = date(2022, 12, 29)
    power = "legislative"
11 changes: 11 additions & 0 deletions data_collection/gazette/spiders/ba/ba_conceicao_do_almeida_2024.py
Contributor

@talesmota in this case the current scraper is not working correctly and redirects to this new site (http://conceicaodoalmeida.ba.gov.br), so you can overwrite the existing file instead of creating a new one, as well as remove the year from the scraper's name.

@@ -0,0 +1,11 @@
from datetime import date

from gazette.spiders.base.brtransparencia import BaseBrTransparenciaSpider


class BaConceicaoDoAlmeidaSpider(BaseBrTransparenciaSpider):
    name = "ba_conceicao_do_almeida_2024"
    TERRITORY_ID = "2908309"
    allowed_domains = ["www.conceicaodoalmeida.ba.gov.br", "api.brtransparencia.com.br"]
    start_urls = ["https://www.conceicaodoalmeida.ba.gov.br/diario.html"]
    start_date = date(2019, 5, 3)
11 changes: 11 additions & 0 deletions data_collection/gazette/spiders/ba/ba_ibicoara.py
@@ -0,0 +1,11 @@
from datetime import date

from gazette.spiders.base.brtransparencia import BaseBrTransparenciaSpider


class BaIbicoaraSpider(BaseBrTransparenciaSpider):
    name = "ba_ibicoara"
    TERRITORY_ID = "2912202"
    allowed_domains = ["www.camaraibicoara.ba.gov.br", "api.brtransparencia.com.br"]
    start_urls = ["https://www.camaraibicoara.ba.gov.br/diario.html"]
    start_date = date(2020, 2, 1)
11 changes: 11 additions & 0 deletions data_collection/gazette/spiders/ba/ba_itaquara_2024.py
Contributor

@talesmota in this case the current scraper is not working correctly and redirects to this new site (https://www.itaquara.ba.gov.br/), so you can overwrite the existing file instead of creating a new one, as well as remove the year from the scraper's name.

@@ -0,0 +1,11 @@
from datetime import date

from gazette.spiders.base.brtransparencia import BaseBrTransparenciaSpider


class BaItaquaraSpider(BaseBrTransparenciaSpider):
    name = "ba_itaquara_2024"
    TERRITORY_ID = "2916708"
    allowed_domains = ["www.itaquara.ba.gov.br", "api.brtransparencia.com.br"]
    start_urls = ["https://www.itaquara.ba.gov.br/diario.html"]
    start_date = date(2019, 1, 1)
Contributor

@talesmota searching this scraper's website, the oldest edition found was from 26/07/2019:

Suggested change
    start_date = date(2019, 1, 1)
    start_date = date(2019, 7, 26)

15 changes: 15 additions & 0 deletions data_collection/gazette/spiders/ba/ba_porto_seguro.py
@@ -0,0 +1,15 @@
from datetime import date

from gazette.spiders.base.brtransparencia import BaseBrTransparenciaSpider


class BaPortoSeguroSpider(BaseBrTransparenciaSpider):
    name = "ba_porto_seguro"
    TERRITORY_ID = "2925303"
    allowed_domains = [
        "cmportoseguroba.brtransparencia.com.br",
        "api.brtransparencia.com.br",
    ]
    start_urls = ["https://cmportoseguroba.brtransparencia.com.br/diario.html"]
    start_date = date(2022, 12, 19)
    power = "legislative"
15 changes: 15 additions & 0 deletions data_collection/gazette/spiders/ba/ba_rio_real.py
@@ -0,0 +1,15 @@
from datetime import date

from gazette.spiders.base.brtransparencia import BaseBrTransparenciaSpider


class BaRioRealSpider(BaseBrTransparenciaSpider):
    name = "ba_rio_real"
    TERRITORY_ID = "2927002"
    allowed_domains = [
        "cmriorealba.brtransparencia.com.br",
        "api.brtransparencia.com.br",
    ]
    start_urls = ["https://http://cmriorealba.brtransparencia.com.br/diario.html"]
Contributor

@talesmota I believe the link here should use just http:

Suggested change
    start_urls = ["https://http://cmriorealba.brtransparencia.com.br/diario.html"]
    start_urls = ["http://cmriorealba.brtransparencia.com.br/diario.html"]

    start_date = date(2022, 12, 29)
    power = "legislative"
12 changes: 12 additions & 0 deletions data_collection/gazette/spiders/ba/ba_saude_2024.py
@@ -0,0 +1,12 @@
from datetime import date

from gazette.spiders.base.brtransparencia import BaseBrTransparenciaSpider


class BaSaudeSpider(BaseBrTransparenciaSpider):
    name = "ba_saude_2024"
    TERRITORY_ID = "2929800"
    allowed_domains = ["pmsaudeba.brtransparencia.com.br", "api.brtransparencia.com.br"]
    start_urls = ["https://pmsaudeba.brtransparencia.com.br/diario.html"]
    start_date = date(2024, 1, 31)
    power = "executive"
68 changes: 68 additions & 0 deletions data_collection/gazette/spiders/base/brtransparencia.py
@@ -0,0 +1,68 @@
import re
from datetime import datetime

from scrapy import Request
from scrapy.selector import Selector

from gazette.items import Gazette
from gazette.spiders.base import BaseGazetteSpider


class BaseBrTransparenciaSpider(BaseGazetteSpider):
    name = ""
    TERRITORY_ID = ""
    allowed_domains = []
    start_urls = [""]
    power = "executive"
Contributor

@talesmota this attribute is not specified in the BaseGazetteSpider class; with flexibility in mind, adding attributes to a base class is not recommended.

Author

Hi, @victorfernandesraton! I appreciate your feedback. What do you suggest?

I took this path with the Open-Closed Principle from SOLID in mind. Based on that principle and on my experience in other projects, specializing a class to handle this specific layout seemed appropriate at implementation time. Besides, while exploring the code, I noticed that other files also specialize attributes this way, such as:

  • data_collection/gazette/spiders/base/atende_layoutdois.py
  • data_collection/gazette/spiders/base/sai.py
  • data_collection/gazette/spiders/base/doem.py
  • data_collection/gazette/spiders/base/dionet.py

The idea of listing the parameters explicitly in the superclass was precisely to make clear which data must be defined in the child classes. But I am fully open to learning your code standards so I can contribute in an increasingly aligned way.

Contributor

From what I have seen, we really have no other alternative; I will approve at least this stage.

Contributor

@talesmota as for this default value, I believe it is better to remove the attribute from the parent class so that developers of future scrapers are prompted to check and set it in each case:

Suggested change
    power = "executive"


    def _extract_code_from_response_text(self, response_text, field="entity"):
        return re.search(
            rf'var {field}(\ )*=(\ )*["|\'](.+?)["|\']',
            response_text,
            re.IGNORECASE,
        ).groups()[2]

    def _extract_entity_code(self, response):
        response_text = response.text
        try:
            response_entity = self._extract_code_from_response_text(
                response_text, field="entity"
            )
        except AttributeError as exc:
            raise AttributeError("Was not possible to extract the entity code") from exc
        try:
            response_code = self._extract_code_from_response_text(
                response_text, field="code"
            )
        except AttributeError as exc:
            raise AttributeError("Was not possible to extract the code") from exc
Comment on lines +31 to +38
Contributor

@talesmota in these exception cases, it is preferable to simply let the scraper break, or, if you prefer, log the error and keep the original exception:

Suggested change
        except AttributeError as exc:
            raise AttributeError("Was not possible to extract the entity code") from exc
        try:
            response_code = self._extract_code_from_response_text(
                response_text, field="code"
            )
        except AttributeError as exc:
            raise AttributeError("Was not possible to extract the code") from exc
        except AttributeError:
            self.logger.error("Was not possible to extract the entity code")
            raise
        try:
            response_code = self._extract_code_from_response_text(
                response_text, field="code"
            )
        except AttributeError:
            self.logger.error("Was not possible to extract the code")
            raise


        api_url = f"https://api.brtransparencia.com.br/api/diariooficial/filtro/{response_entity}/{response_code}/{self.start_date}/{self.end_date}/-1/-1"
        yield Request(api_url)

    def start_requests(self):
        # getting the entity and code from inner JS Content file
        url = self.start_urls[0].replace("/diario.html", "/js/content.js")

        yield Request(url, callback=self._extract_entity_code)

    def parse(self, response):
        for entry in response.json():
            edition_date = datetime.strptime(
                entry["dat_publicacao_dio"], "%Y-%m-%dT%H:%M:%S"
            ).date()
            extra_edition = entry["des_extra_dio"] is not None
            edition_number = int(entry["num_diario_oficial_dio"])
            gazettes = Selector(text=entry["des_resumo_dio"]).css("a")
            urls = []
            for item in gazettes:
                link = item.css("a::attr(href)").get()
                urls.append(link)

            yield Gazette(
                edition_number=edition_number,
                date=edition_date,
                file_urls=urls,
                is_extra_edition=extra_edition,
                power=self.power,
            )
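To make the base spider's first step concrete: a minimal, self-contained sketch of the `var entity` / `var code` extraction that `_extract_code_from_response_text` performs over the site's `js/content.js`. The JS snippet and its values below are hypothetical; only the regex comes from the spider above.

```python
import re

# Hypothetical js/content.js snippet; the actual values vary per site.
sample_js = """
var entity = "401";
var code = '12';
"""


def extract(field, text):
    # Same pattern as BaseBrTransparenciaSpider._extract_code_from_response_text:
    # the third capture group holds the quoted value assigned to `field`.
    return re.search(
        rf'var {field}(\ )*=(\ )*["|\'](.+?)["|\']',
        text,
        re.IGNORECASE,
    ).groups()[2]


entity = extract("entity", sample_js)
code = extract("code", sample_js)
print(entity, code)  # prints: 401 12
```

The two extracted values are then interpolated into the filter endpoint (`.../api/diariooficial/filtro/{entity}/{code}/{start_date}/{end_date}/-1/-1`), whose JSON response is what `parse` consumes.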