Commit bc71775: Merge main into dev (#30)

* Fix encoding error

* Automatic update with GitHub Actions

* Fix encoding error in CISA

* Update CISASpider.py

* Automatic update with GitHub Actions

* Create LICENSE

* Update main.yml

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Fix ZDISpider Date column

* Automatic update with GitHub Actions

* Remove encode decode methods in mdtemplate

* Automatic update with GitHub Actions

* Fix link selector not getting links correctly

* Automatic update with GitHub Actions

* Add temporary link to IBMCloud

* Automatic update with GitHub Actions

* Limit number of data to seven

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Add logo

* Update README

* Update README

* Change image

* Update README and docs

* Update README and docs

* Update README and docs

* Automatic update with GitHub Actions

* Update README

* Update README

* Fix error in header encoding

* Fix error in header encoding

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Create Dependabot.yml for automatic version updates

* Bump python-dotenv from 0.19.2 to 0.20.0

Bumps [python-dotenv](https://github.com/theskumar/python-dotenv) from 0.19.2 to 0.20.0.
- [Release notes](https://github.com/theskumar/python-dotenv/releases)
- [Changelog](https://github.com/theskumar/python-dotenv/blob/master/CHANGELOG.md)
- [Commits](theskumar/python-dotenv@v0.19.2...v0.20.0)

---
updated-dependencies:
- dependency-name: python-dotenv
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump scrapy from 2.5.1 to 2.6.2

Bumps [scrapy](https://github.com/scrapy/scrapy) from 2.5.1 to 2.6.2.
- [Release notes](https://github.com/scrapy/scrapy/releases)
- [Changelog](https://github.com/scrapy/scrapy/blob/master/docs/news.rst)
- [Commits](scrapy/scrapy@2.5.1...2.6.2)

---
updated-dependencies:
- dependency-name: scrapy
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump requests from 2.27.1 to 2.28.1

Bumps [requests](https://github.com/psf/requests) from 2.27.1 to 2.28.1.
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](psf/requests@v2.27.1...v2.28.1)

---
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump coverage from 6.3.2 to 6.4.2

Bumps [coverage](https://github.com/nedbat/coveragepy) from 6.3.2 to 6.4.2.
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](nedbat/coveragepy@6.3.2...6.4.2)

---
updated-dependencies:
- dependency-name: coverage
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump selenium from 3.141.0 to 4.3.0

Bumps [selenium](https://github.com/SeleniumHQ/Selenium) from 3.141.0 to 4.3.0.
- [Release notes](https://github.com/SeleniumHQ/Selenium/releases)
- [Commits](SeleniumHQ/selenium@selenium-3.141.0...selenium-4.3.0)

---
updated-dependencies:
- dependency-name: selenium
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Revert "Bump selenium from 3.141.0 to 4.3.0"

* Automatic update with GitHub Actions

* Add OBS Vigilance Spider (#20)

* Add OBS Vigilance Spider

* Automatic update with GitHub Actions

Co-authored-by: karimhabush <37211852+karimhabush@users.noreply.github.com>
Co-authored-by: GitHub Action <action@github.com>

* Automatic update with GitHub Actions

* Add VulDB Spider (#23)

* Add OBS Vigilance Spider

* Add VulDB spider

Co-authored-by: karimhabush <37211852+karimhabush@users.noreply.github.com>

* Automatic update with GitHub Actions

* Remove the ugly logo 🙂

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

* Automatic update with GitHub Actions

Co-authored-by: GitHub Action <action@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chaymae-jhr <67756131+chaymae-jhr@users.noreply.github.com>
4 people committed Aug 10, 2022
1 parent 77286a7 commit bc71775
Showing 20 changed files with 232 additions and 71 deletions.
11 changes: 11 additions & 0 deletions .github/dependabot.yml
@@ -0,0 +1,11 @@
+# To get started with Dependabot version updates, you'll need to specify which
+# package ecosystems to update and where the package manifests are located.
+# Please see the documentation for all configuration options:
+# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
+
+version: 2
+updates:
+  - package-ecosystem: "pip"
+    directory: "/" #
+    schedule:
+      interval: "daily"
4 changes: 2 additions & 2 deletions .github/workflows/main.yml
@@ -26,8 +26,8 @@ jobs:
       - name: Commit changes
         uses: EndBug/add-and-commit@v8
         with:
-          author_name: Karim Habouch
-          author_email: karim.habush@gmail.com
+          author_name: GitHub Action
+          author_email: action@github.com
           message: 'Automatic update with GitHub Actions'
           add: 'README.md'
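In plain git terms, the step above is roughly equivalent to the following calls (a sketch; `EndBug/add-and-commit` also handles pushing and edge cases), which is why the many "Automatic update with GitHub Actions" commits in the log above are now authored by "GitHub Action" rather than a personal account:

```python
# Rough sketch of what the add-and-commit action boils down to; not the
# action's actual implementation.
import subprocess

subprocess.run(["git", "config", "user.name", "GitHub Action"], check=True)
subprocess.run(["git", "config", "user.email", "action@github.com"], check=True)
subprocess.run(["git", "add", "README.md"], check=True)
subprocess.run(["git", "commit", "-m", "Automatic update with GitHub Actions"], check=True)
```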

21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2022 karimhabush

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
146 changes: 96 additions & 50 deletions README.md

Large diffs are not rendered by default.

Empty file added docs/CONTRIBUTING.md
Empty file.
Binary file added docs/images/logo.png
20 changes: 11 additions & 9 deletions main.py
@@ -1,31 +1,33 @@
 from spiders.IBMcloudSpider import IBMCloudSpider
+from spiders.VigilanceSpider import VigilanceSpider
 from spiders.CISASpider import CisaSpider
 from spiders.CertFrSpider import CertFrSpider
 from spiders.DgssiSpider import DgssiSpider
+from spiders.VulDBSpider import VulDBSpider
 from spiders.ZDISpider import ZDISpider
 from scrapy.crawler import CrawlerProcess
-from datetime import datetime
+from datetime import datetime, timezone



-def main():
-    now = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
-
-    item = f"""<div id="top"></div>\n\n## CyberOwl \n> Last Updated {now} \n\n
-A daily updated summary of the most frequent types of security incidents currently being reported from different sources.\n\n
-### Jump to \n * [CISA](#cisa-arrow_heading_up)\n* [MA-CERT](#ma-cert-arrow_heading_up)\n* [CERT-FR](#cert-fr-arrow_heading_up)
-\n* [IBMCLOUD](#ibmcloud-arrow_heading_up)\n* [ZeroDayInitiative](#zerodayinitiative-arrow_heading_up)\n\n"""
+def main():
+    now = datetime.now(timezone.utc).strftime("%d/%m/%Y %H:%M:%S")
+    item = f"""<div id="top"></div>\n\n## CyberOwl \n> Last Updated {now} UTC \n\nA daily updated summary of the most frequent types of security incidents currently being reported from different sources.\n\n--- \n\n### :kangaroo: Jump to \n | CyberOwl Sources | Description |\n|---|---|\n| [US-CERT](#us-cert-arrow_heading_up) | United States Computer Emergency and Readiness Team. |\n| [MA-CERT](#ma-cert-arrow_heading_up) | Moroccan Computer Emergency Response Team. |\n| [CERT-FR](#cert-fr-arrow_heading_up) | The French national government Computer Security Incident Response Team. |\n| [IBM X-Force Exchange](#ibmcloud-arrow_heading_up) | A cloud-based threat intelligence platform that allows to consume, share and act on threat intelligence. |\n| [ZeroDayInitiative](#zerodayinitiative-arrow_heading_up) | An international software vulnerability initiative that was started in 2005 by TippingPoint. |\n| [OBS Vigilance](#obs-vigilance-arrow_heading_up) |Vigilance is an initiative created by OBS (Orange Business Services) since 1999 to watch public vulnerabilities and then offer security fixes, a database and tools to remediate them. |\n| [VulDB](#vuldb-arrow_heading_up) | Number one vulnerability database documenting and explaining security vulnerabilities, threats, and exploits since 1970. |\n\n> Suggest a source by opening an [issue](https://github.com/karimhabush/cyberowl/issues)! :raised_hands:\n\n"""

-    with open("README.md", "w") as f:
+    with open("README.md", "w", encoding="utf-8") as f:
         f.write(item)
         f.close()

     try:
         process = CrawlerProcess()
         process.crawl(CisaSpider)
+        process.crawl(CisaSpider)
+        process.crawl(DgssiSpider)
         process.crawl(CertFrSpider)
         process.crawl(IBMCloudSpider)
         process.crawl(ZDISpider)
+        process.crawl(VigilanceSpider)
+        process.crawl(VulDBSpider)
         process.start()

     except Exception:
2 changes: 1 addition & 1 deletion mdtemplate.py
@@ -23,7 +23,7 @@ def _validate_data(self):
                 "The dictionnaries in _data arrow is expecting the element _date.")

     def _set_heading(self):
-        return f"""## {self.SOURCE} [:arrow_heading_up:](#cyberowl)\n"""
+        return f"""---\n### {self.SOURCE} [:arrow_heading_up:](#cyberowl)\n"""

     def _set_table_headers(self):
         return """|Title|Description|Date|\n|---|---|---|\n"""
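To make the heading change concrete, here is a minimal sketch of the markdown this template now emits. It is not the repo's actual `Template` class: the two strings are copied from the diff above, and the sample row is invented.

```python
# Minimal sketch of the markdown produced after this change; the heading and
# table-header strings mirror _set_heading/_set_table_headers above.
SOURCE = "US-CERT"

heading = f"""---\n### {SOURCE} [:arrow_heading_up:](#cyberowl)\n"""
table_headers = """|Title|Description|Date|\n|---|---|---|\n"""
sample_row = "|Sample bulletin title|Visit link for details|10/08/2022|\n"

# The new "---" prefix draws a horizontal rule before each source section.
print(heading + table_headers + sample_row)
```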
Binary file modified requirements.txt
Binary file not shown.
4 changes: 2 additions & 2 deletions spiders/CISASpider.py
@@ -31,8 +31,8 @@ def parse(self, response):

             _data.append(ITEM)

-        _to_write = Template("CISA", _data)
+        _to_write = Template("US-CERT", _data)

-        with open("README.md", "a") as f:
+        with open("README.md", "a", encoding="utf-8") as f:
             f.write(_to_write._fill_table())
             f.close()
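A note on the `encoding="utf-8"` change, which recurs across the spiders and explains the "Fix encoding error" commits above: without an explicit encoding, `open()` uses the platform's locale encoding, such as cp1252 on Windows, which cannot represent every character that appears in bulletin titles. A minimal sketch with a hypothetical row:

```python
# Hypothetical row: "≥" and the CJK character are not representable in cp1252,
# so writing them without encoding="utf-8" raises UnicodeEncodeError under a
# Windows default locale; utf-8 writes them on every platform.
row = "|Vulnerability in FooApp ≥ 2.0 (例)|Visit link for details|10/08/2022|\n"

with open("README.md", "a", encoding="utf-8") as f:
    f.write(row)
```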
8 changes: 4 additions & 4 deletions spiders/IBMcloudSpider.py
@@ -9,11 +9,11 @@

 class IBMCloudSpider(scrapy.Spider):
     name = "countries_spider"
-    allowed_domains = ["toscrape.com"]
+    allowed_domains = ["ibmcloud.com"]

     # Using a dummy website to start scrapy request
     def start_requests(self):
-        url = "http://quotes.toscrape.com"
+        url = "https://exchange.xforce.ibmcloud.com/activity/list?filter=Vulnerabilities"
         yield scrapy.Request(url=url, callback=self.parse_countries)

     def parse_countries(self, response):
@@ -42,7 +42,7 @@ def parse_countries(self, response):
         # Using Scrapy's yield to store output instead of explicitly writing to a JSON file
         _data = []
         for country in countries:
-            LINK = country.find_element_by_xpath(".//a").get_attribute("href")
+            LINK = "https://exchange.xforce.ibmcloud.com/activity/list?filter=Vulnerabilities"
             DATE = country.find_element_by_xpath(".//td[4]").text
             TITLE = country.find_element_by_xpath(".//a").text

@@ -55,7 +55,7 @@ def parse_countries(self, response):

             _data.append(ITEM)
             num_bulletins += 1
-            if num_bulletins >= 8:
+            if num_bulletins >= 7:
                 break

         _to_write = Template("IBMCloud", _data)
38 changes: 38 additions & 0 deletions spiders/VigilanceSpider.py
@@ -0,0 +1,38 @@
import scrapy
from mdtemplate import Template


class VigilanceSpider(scrapy.Spider):
    name = 'vigilance'
    start_urls = [
        'https://vigilance.fr/?action=1135154048&langue=2'
    ]
    def parse(self, response):
        if('cached' in response.flags):
            return
        num_bulletins=0
        _data = []
        for bulletin in response.css("article > table"):
            LINK = bulletin.xpath("descendant-or-self::tr/td/a/@href").get()
            DATE = "Visit link for details"
            TITLE = bulletin.xpath("descendant-or-self::tr/td/a").get().replace(
                "\n", "").replace("\t", "").replace("\r", "").replace(" ", "").replace("|","-")
            DESC = bulletin.xpath('descendant-or-self::tr/td/font/i/a/text()').get().replace(
                "\n", "").replace("\t", "").replace("\r", "").replace(" ", "").replace("|","-")
            ITEM = {
                "_title": TITLE,
                "_link": LINK,
                "_date": DATE,
                "_desc": DESC
            }

            _data.append(ITEM)
            num_bulletins += 1
            if num_bulletins >= 10:
                break

        _to_write = Template("OBS-Vigilance", _data)

        with open("README.md", "a", encoding="utf-8") as f:
            f.write(_to_write._fill_table())
            f.close()
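A quick way to exercise the new spider on its own, as a sketch that mirrors how `main.py` drives the other spiders (run from the repository root so `mdtemplate` and `spiders` are importable):

```python
from scrapy.crawler import CrawlerProcess

from spiders.VigilanceSpider import VigilanceSpider

process = CrawlerProcess()
process.crawl(VigilanceSpider)
process.start()  # blocks until the crawl finishes; the spider appends its table to README.md
```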
43 changes: 43 additions & 0 deletions spiders/VulDBSpider.py
@@ -0,0 +1,43 @@
from urllib import response
import scrapy
from mdtemplate import Template
from datetime import date


class VulDBSpider(scrapy.Spider):
    name = 'VulDB'
    start_urls = [
        'https://vuldb.com/?live.recent'
    ]
    def parse(self, response):
        if('cached' in response.flags):
            return
        num_bulletins=0
        _data = []
        print(response.css("table>tr").extract())
        for bulletin in response.css("table>tr"):
            if num_bulletins==0:
                num_bulletins+=1
                continue
            LINK = "https://vuldb.com/"+bulletin.xpath("descendant-or-self::td[4]//@href").get()
            DATE = str(date.today())+" at "+bulletin.xpath("descendant-or-self::td[1]//text()").get()
            TITLE = bulletin.xpath("descendant-or-self::td[4]//text()").get().replace(
                "\n", "").replace("\t", "").replace("\r", "").replace(" ", "").replace("|","-")
            DESC = "Visit link for details"
            ITEM = {
                "_title": TITLE,
                "_link": LINK,
                "_date": DATE,
                "_desc": DESC
            }

            _data.append(ITEM)
            num_bulletins += 1
            if num_bulletins >= 11:
                break

        _to_write = Template("VulDB", _data)

        with open("README.md", "a", encoding="utf-8") as f:
            f.write(_to_write._fill_table())
            f.close()
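A side note on the `if('cached' in response.flags)` guard that both new spiders share: Scrapy only adds the 'cached' flag when its HTTP cache middleware is enabled, so the early return is inert under default settings. A sketch of a configuration where the guard actually fires:

```python
from scrapy.crawler import CrawlerProcess

from spiders.VulDBSpider import VulDBSpider

# With the HTTP cache on, responses replayed from Scrapy's cache carry the
# 'cached' flag, and the spider returns early instead of appending a stale
# section to README.md.
process = CrawlerProcess(settings={
    "HTTPCACHE_ENABLED": True,          # enables HttpCacheMiddleware
    "HTTPCACHE_EXPIRATION_SECS": 3600,  # assumption: hour-long cache validity
})
process.crawl(VulDBSpider)
process.start()
```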
6 changes: 3 additions & 3 deletions spiders/ZDISpider.py
@@ -9,11 +9,11 @@

 class ZDISpider(scrapy.Spider):
     name = "countries_spider"
-    allowed_domains = ["toscrape.com"]
+    allowed_domains = ["zerodayinitiative.com"]

     # Using a dummy website to start scrapy request
     def start_requests(self):
-        url = "http://quotes.toscrape.com"
+        url = "https://www.zerodayinitiative.com/advisories/published/"
         yield scrapy.Request(url=url, callback=self.parse_countries)

     def parse_countries(self, response):
@@ -42,7 +42,7 @@ def parse_countries(self, response):
         _data = []
         for country in countries:
             LINK = country.find_element_by_xpath(".//a").get_attribute("href")
-            DATE = country.find_element_by_xpath(".//td[5]").text
+            DATE = country.find_element_by_xpath(".//td[6]").text
             TITLE = country.find_element_by_xpath(".//a").text

             ITEM = {
Binary file added spiders/__pycache__/CISASpider.cpython-310.pyc
Binary file not shown.
Binary file added spiders/__pycache__/CertFrSpider.cpython-310.pyc
Binary file not shown.
Binary file added spiders/__pycache__/DgssiSpider.cpython-310.pyc
Binary file not shown.
Binary file not shown.
Binary file added spiders/__pycache__/ZDISpider.cpython-310.pyc
Binary file not shown.
Binary file added spiders/__pycache__/__init__.cpython-310.pyc
Binary file not shown.
