This repository provides a set of tools for searching and extracting videos from VHAKG, a multi-modal knowledge graph (MMKG) of multi-view videos of daily activities.
- Local machine (RAM: 32 GB, HDD: 150 GB of free space)
  - If there is not enough free memory, data loading will be skipped; in that case, increase Docker's memory allocation (you can check the current allocation as shown after this list). We allocated 16 GB of memory to Docker and confirmed that it works; it may work with a little less.
- Install Docker
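If you want to confirm how much memory Docker can actually use before loading the data, one quick check (assuming Docker Desktop or a similar setup) is:

```sh
# Prints the total memory available to Docker, in bytes.
# Compare against the ~16 GB allocation mentioned above.
docker info --format '{{.MemTotal}}'
```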
To use the GUI:

- Download VHAKG
- Run `mkdir RDF` only for the first time.
- Place VHAKG's `.ttl` files in `RDF/` only for the first time (see the sketch below).
  - Important: Please do not place any files other than `.ttl` under `RDF/`, and please delete `.DS_Store` if it exists.
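As a first-time setup sketch (`~/Downloads/vhakg` is a placeholder for wherever you saved the downloaded VHAKG files):

```sh
# Create the RDF/ directory and copy only the VHAKG .ttl files into it.
mkdir RDF
cp ~/Downloads/vhakg/*.ttl RDF/
# macOS Finder may create a .DS_Store here; remove it so only .ttl files remain.
rm -f RDF/.DS_Store
```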
- Run `chmod +x entrypoint.sh` only for the first time.
- Run `docker compose up --build -d`.
  - Important: If you are not using Apple Silicon, you must change the GraphDB image in `compose.yaml` from `ontotext/graphdb:10.4.4-arm64` to `ontotext/graphdb:10.4.4` (a one-liner for this follows).
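For non-Apple-Silicon machines, one way to make that image change is an in-place substitution (this assumes the image tag appears only in the GraphDB service entry; double-check `compose.yaml` afterwards):

```sh
# Swap the ARM64 GraphDB image for the x86_64 one; keeps a compose.yaml.bak backup.
sed -i.bak 's|ontotext/graphdb:10.4.4-arm64|ontotext/graphdb:10.4.4|' compose.yaml
```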
- Wait for the data to be loaded, until the Docker GraphDB container displays the log `[main] INFO com.ontotext.graphdb.importrdf.Preload - Finished` (you can watch for this as shown below).
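To watch for that log line without opening the Docker dashboard, you can follow the container's logs; `graphdb` here is an assumed service name, so substitute whatever `compose.yaml` actually calls the GraphDB service:

```sh
# Stream the GraphDB container's logs and stop once the Preload "Finished" line appears.
docker compose logs -f graphdb | grep --line-buffered -m1 'Preload - Finished'
```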
- Open http://localhost:5050
  - Please wait a moment when you open it for the first time, as the back-end system needs to load the activity data.
- Select the search tool you would like to use:
  - Note: You can switch between the two types of search tools by clicking the button at the top left of each page.
  - Searching by activities
  - Searching by actions
To use the CLI:

- Perform the same setup steps as in the GUI (through the data-loading step).
- Run `cd cli`.
- Run `pip install -r requirements.txt` only for the first time.
- Select the tool you would like to use:
  - Search by activities
    - Run `python mmkg-search.py -h` if you want to know the command arguments.
    - Run `python mmkg-search.py <args>`.
  - Search by actions
    - Run `python action-object-search.py -h` if you want to know the command arguments.
    - Run `python action-object-search.py <args>`.
Examples:

- Search by activities: extract the video segment of the "grab" part from camera4's video of "clean_kitchentable1" in scene1.

  ```sh
  python mmkg-search.py clean_kitchentable1 scene1 camera4 . -a grab
  ```

- Search by actions: extract videos that contain a "put" event whose main object is "bread" and whose target object is "fryingpan".

  ```sh
  python action-object-search.py put bread -t fryingpan -f .
  ```
- Users familiar with SPARQL can query the GraphDB SPARQL endpoint directly at localhost:7200/sparql; a minimal example follows.
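For example, a smoke-test query that counts all loaded triples (this assumes the endpoint accepts standard SPARQL-protocol GET requests; the exact endpoint path may vary with your GraphDB repository setup):

```sh
# Count all triples in the loaded VHAKG data via the SPARQL endpoint.
curl -G 'http://localhost:7200/sparql' \
  -H 'Accept: application/sparql-results+json' \
  --data-urlencode 'query=SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }'
```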
For development:

- Run `mkdir RDF` only for the first time.
- Place the RDF data in `RDF/` only for the first time.
- Run `chmod +x entrypoint.sh` only for the first time.
- Run `COMPOSE_FILE=compose.yaml:development.yaml docker compose up`.
- Wait for the data to be loaded until the Docker GraphDB container displays the log `[main] INFO com.ontotext.graphdb.importrdf.Preload - Finished`.
- Open http://localhost:3000
- To lint, run `docker compose exec app-dev sh -c "cd /app && yarn lint"`.
- To format, run `docker compose exec app-dev sh -c "cd /app && yarn format"`.
To set up a Python environment for CLI development:

- Run `pyenv install miniforge3-4.14.0-2`.
- Run `pyenv virtualenv miniforge3-4.14.0-2 vhakg-tools`.
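After creating the environment, a sketch of activating it and installing the CLI dependencies (this assumes the pyenv-virtualenv shell integration is enabled):

```sh
# Activate the virtualenv created above and install the CLI requirements into it.
pyenv activate vhakg-tools
pip install -r cli/requirements.txt
```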
The repository also includes an experimental example of dataset creation and LVLM (large vision-language model) evaluation using VHAKG.
To create the benchmark dataset:

- Run `pip install notebook`.
- Run `jupyter notebook`.
- Open and run `create_benchmark_dataset.ipynb`.
To evaluate an LVLM:

- Run `pip install openai`.
- Run `jupyter notebook`.
- Open and run `evaluate_lvlm.ipynb` with your OpenAI API key (one way to supply it is shown below).
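If the notebook uses the standard OpenAI Python client, exporting `OPENAI_API_KEY` before launching Jupyter is one way to supply the key (this is an assumption about how the notebook reads it; it may prompt for the key or read a config file instead):

```sh
# The openai client picks up OPENAI_API_KEY from the environment by default.
export OPENAI_API_KEY="sk-..."   # placeholder; use your own key
jupyter notebook
```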