This project retrieves and analyzes house advertisements from real estate websites, providing a convenient way to gather information for analysis, research, or other purposes in the real estate domain. The main steps are:
- Data Collection: Collects data from real estate websites using their APIs.
- Data Analysis: Analyzes the collected data using Pandas and Machine Learning (ML) algorithms.
- Report Generation: Generates a report summarizing the findings from the analysis.
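The collect → analyze → report flow can be illustrated with a minimal sketch (field names and statistics here are hypothetical; the real pipeline uses Pandas and the site APIs):

```python
from statistics import mean

# Hypothetical listing records, as they might come back from a site API
listings = [
    {"title": "Flat in centre", "price": 250_000, "size_m2": 80},
    {"title": "Studio", "price": 120_000, "size_m2": 40},
    {"title": "Townhouse", "price": 390_000, "size_m2": 130},
]

def analyze(ads):
    """Compute simple summary statistics over the collected ads."""
    return {
        "count": len(ads),
        "avg_price": mean(a["price"] for a in ads),
        "avg_price_per_m2": mean(a["price"] / a["size_m2"] for a in ads),
    }

def report(stats):
    """Render a one-line textual summary of the analysis."""
    return (f"{stats['count']} listings, avg price {stats['avg_price']:.0f} EUR, "
            f"avg {stats['avg_price_per_m2']:.0f} EUR/m2")

print(report(analyze(listings)))
```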
The project is developed entirely in Python and follows object-oriented programming (OOP) practices. The initial template is provided by Cookiecutter Data Science.

This project may be useful for:
- Data scientists interested in real estate data extraction and analysis.
- Real estate companies looking to integrate listing data into their systems.
- Anyone curious about exploring the world of real estate through data.
Note that this code is provided free of charge, as is. For bug reports, use the issue tracker.
To use the tool, follow these steps:

1. Ensure you have Python 3.10 and pip installed on your system.

2. Clone the repository to your local machine:

   ```shell
   git clone https://github.com/matteorosato/house-finder.git
   ```

3. Navigate to the project directory:

   ```shell
   cd house-finder
   ```

4. Create a virtual environment for the project:

   - On Windows:

     ```shell
     py -m venv .venv
     ```

   - On macOS and Linux:

     ```shell
     python3 -m venv .venv
     ```

5. Activate the virtual environment:

   - On Windows:

     ```shell
     .venv\Scripts\activate
     ```

   - On macOS and Linux:

     ```shell
     source .venv/bin/activate
     ```

6. Install the required dependencies by running:

   - On Windows:

     ```shell
     py -m pip install -r requirements.txt
     ```

   - On macOS and Linux:

     ```shell
     python3 -m pip install -r requirements.txt
     ```

7. Fill the `.env` file with the required environment variables, using the `.env.example` file as a reference.

8. Fill the `config.toml` file according to your preferences and needs.

9. Run the tool:

   ```shell
   # Download the data and create the dataset
   python src/run.py
   ```
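The authoritative list of environment variables is the `.env.example` file in the repository; given the Idealista credentials the tool needs, the filled `.env` will look something like this (placeholder values, illustrative only):

```
# .env — illustrative placeholders; see .env.example for the authoritative list
API_KEY=your_idealista_api_key
SECRET=your_idealista_secret
```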
Currently, the following website is supported:

- Idealista

This tool utilizes the APIs provided by Idealista to extract real estate listing data. To run the tool, you need to obtain an API_KEY and SECRET by requesting access through the Idealista API Access Request page.
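For orientation, this is roughly how an API_KEY/SECRET pair is exchanged for a bearer token. This sketch is not part of the project's source; it assumes the OAuth2 client-credentials flow and token endpoint described in Idealista's API documentation:

```python
import base64
import json
import urllib.parse
import urllib.request

def build_auth_header(api_key: str, secret: str) -> str:
    """HTTP Basic auth header value built from the Idealista credentials."""
    token = base64.b64encode(f"{api_key}:{secret}".encode()).decode()
    return f"Basic {token}"

def fetch_token(api_key: str, secret: str) -> str:
    """Exchange the credentials for a bearer token (OAuth2 client-credentials
    flow; endpoint assumed from the Idealista API documentation)."""
    body = urllib.parse.urlencode(
        {"grant_type": "client_credentials", "scope": "read"}).encode()
    req = urllib.request.Request(
        "https://api.idealista.com/oauth/token",
        data=body,
        headers={
            "Authorization": build_auth_header(api_key, secret),
            "Content-Type": "application/x-www-form-urlencoded",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]
```

The returned token is then sent as `Authorization: Bearer <token>` on search requests.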
Please note that free access is limited to 100 requests per month and 1 request per second. Therefore, it's important to configure the filtering parameters carefully to avoid wasting requests.
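A quota this tight is worth enforcing client-side. The helper below is an illustration of one way to do that (it is not part of the project's code): it caps the total number of requests and spaces consecutive calls out to one per second.

```python
import time

class RequestBudget:
    """Client-side guard for an API quota: a monthly request cap plus a
    minimum spacing between consecutive requests."""

    def __init__(self, monthly_limit: int = 100, min_interval: float = 1.0):
        self.monthly_limit = monthly_limit
        self.min_interval = min_interval
        self.used = 0
        self._last = 0.0

    def acquire(self) -> None:
        """Block until a request is allowed; raise once the quota is spent."""
        if self.used >= self.monthly_limit:
            raise RuntimeError("monthly API quota exhausted")
        wait = self.min_interval - (time.monotonic() - self._last)
        if wait > 0:
            time.sleep(wait)  # respect the 1 request/second cap
        self._last = time.monotonic()
        self.used += 1

# Call budget.acquire() immediately before each API request:
budget = RequestBudget(monthly_limit=100, min_interval=1.0)
```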
For further information, refer to the documents located in the references folder.
The project is organized as follows:
├── LICENSE
├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── external <- Data from third party sources.
│ ├── interim <- Intermediate data that has been transformed.
│ ├── processed <- The final, canonical data sets for modeling.
│ └── raw <- The original, immutable data dump.
│
├── docs <- A default Sphinx project; see sphinx-doc.org for details
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks <- Jupyter notebooks. Naming convention is a number (for ordering),
│ the creator's initials, and a short `-` delimited description, e.g.
│ `1.0-jqp-initial-data-exploration`.
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Generated analysis as HTML, PDF, LaTeX, etc.
│ └── figures <- Generated graphics and figures to be used in reporting
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── setup.py <- makes project pip installable (pip install -e .) so src can be imported
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── data <- Scripts to download or generate data
│ │ └── make_dataset.py
│ │
│ ├── features <- Scripts to turn raw data into features for modeling
│ │ └── build_features.py
│ │
│ ├── models <- Scripts to train models and then use trained models to make
│ │ │ predictions
│ │ ├── predict_model.py
│ │ └── train_model.py
│ │
│ └── visualization <- Scripts to create exploratory and results oriented visualizations
│ └── visualize.py
│
└── tox.ini <- tox file with settings for running tox; see tox.readthedocs.io
Project based on the cookiecutter data science project template. #cookiecutterdatascience