[TOC]
- #### Option-1: Raspberry Pi
  A Linux system can handle everything (sensing, logic, display, audio) on a single board.
- #### Option-2: Arduino + NUC Computer
  Sensors on Arduino: the Arduino manages the sensors (motion detection, capacitive touch) and sends a signal to the NUC when specific conditions are met (e.g., "user detected" or "hands on statue").
  NUC responds: the NUC handles the more complex logic, such as querying ChatGPT or controlling the TV and speaker, based on the signals received from the Arduino.
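As a sketch of the Option-2 split, the NUC side could normalize and dispatch the Arduino's serial messages roughly like this. The event names and dispatch table are assumptions for illustration, not an agreed protocol:

```python
# Hypothetical NUC-side logic for Option 2: the Arduino is assumed to send
# newline-delimited event tags (e.g. "MOTION", "TOUCH") over USB serial.
# The tag names and the action table below are our invention.

def parse_event(line):
    """Normalize one serial line into a known event name, or None."""
    event = line.strip().upper()
    return event if event in {"MOTION", "TOUCH", "IDLE"} else None

def dispatch(event):
    """Map an Arduino event to the action the NUC should take."""
    actions = {
        "MOTION": "prepare_interaction",   # user detected in front of the statue
        "TOUCH": "trigger_response",       # hands placed on the statue's mouth
        "IDLE": "start_breathing_light",   # no activity for a while
    }
    return actions[event]
```

On real hardware the lines would come from something like pyserial's `Serial.readline()`.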
- When a user stands in front of the statue, the motion sensor activates and prepares the system to interact.
- When the user places their hands on the statue's mouth, a response is triggered. This could involve illuminating specific LEDs or playing a sound.
- If no motion is detected for a while, the LED strip emits a slow, calming breathing light to signal that the system is in standby mode.
- The user types questions via the keyboard, and the system sends the query to ChatGPT. The answer is displayed on the old TV and read aloud via the speaker.
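The steps above amount to a small state machine; a minimal sketch, where the state and event names are our own labels rather than anything from the codebase:

```python
# Minimal state machine matching the interaction flow above; state and
# event names are our own labels, not project constants.

TRANSITIONS = {
    ("STANDBY", "motion_detected"): "READY",       # user stands in front
    ("READY", "hands_on_mouth"): "LISTENING",      # hands on the statue's mouth
    ("LISTENING", "question_typed"): "ANSWERING",  # query sent to ChatGPT
    ("ANSWERING", "answer_spoken"): "READY",       # answer displayed and read aloud
    ("READY", "idle_timeout"): "STANDBY",          # breathing light resumes
}

def step(state, event):
    """Advance the interaction state; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```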
| User Movement | Input / Output | Device |
|---|---|---|
| Stand far away | Output: "Come and ask a yes/no question" | TV/Speaker |
| | Output: breathing light | RGB LED strip with PWM control |
| | Input: detect user proximity to the statue | PIR motion sensor |
| Put hand inside | Input: detect hand placement on the statue's mouth | Capacitive touch sensor or pressure sensor |
| | Output: light on | RGB LED strip with PWM control |
| Ask a yes/no question | Output: light on | RGB LED strip with PWM control |
| Type question | Input: typed text shown on screen | Logitech K400 Plus keyboard, TV |
| Wait | Output: light blinking | RGB LED strip with PWM control |
| | Output: read the answer aloud | TV/Speaker |
| Step away | Output: "Come and ask a yes/no question" | TV/Speaker |
| | Output: breathing light | RGB LED strip with PWM control |
| | Input: detect user proximity to the statue | PIR motion sensor |
- Raspberry Pi 4 (with WiFi and Bluetooth)
- PIR Motion Sensor (long range)
  - For detecting user proximity to the statue.
- Capacitive Touch Sensor or Pressure Sensor
  - To detect hand placement on the statue's mouth.
- RGB LED Strip with PWM control
  - For the breathing light effect.
- Logitech K400 Plus Wireless Keyboard
  - Recommended by Mel.
- HDMI to VGA Adapter
  - To connect the Raspberry Pi to the old TV.
- USB or Bluetooth Speaker
  - For audio output.
- HDMI Cable
  - For connecting the Raspberry Pi to the HDMI-to-VGA adapter.
- VGA Cable
  - For connecting the adapter to the old TV.
- OpenAI API Key
- Old TV
- Breadboards, wires, and resistors
  - For connecting the sensors to the board.
  - Will probably need soldering.
We could write the software in either Python or C++; both have library support for this hardware.
- A simple terminal-style I/O GUI (by default)
  - For displaying input and output text on the TV screen.
- Sensor Inputs
- User Inputs (by default)
- ChatGPT Request
- Please answer my question with yes/no, or pick from the following reasons why ChatGPT can't answer the question with yes/no (a bad question). These are the bad question categories:
- No information access question
- Time limit information question
- No sensor question
- No right or wrong answer question
- Dependent on Real-Time Data
- Requiring Personal or Contextual Information About the User
- Highly Subjective Questions / Personal Opinions
- Exact Predictions
- Deeply Personal Issues
- Medical or Legal Advice
- Sensory Input-Based Question
- Questions Involving Human Emotions or Relationships
- Interpretation of Art or Literature
- Speculative or Theoretical Queries
- General Knowledge and Fact Verification
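One way to enforce this is to fold the yes/no constraint and the category list into the system prompt sent with every request. A sketch, with the exact wording and the function name our own:

```python
# Sketch of packing the yes/no constraint and the bad-question categories
# above into a single system prompt; phrasing is ours, not the project's.

BAD_QUESTION_CATEGORIES = [
    "Dependent on Real-Time Data",
    "Requiring Personal or Contextual Information About the User",
    "Highly Subjective Questions / Personal Opinions",
    "Exact Predictions",
    "Deeply Personal Issues",
    "Medical or Legal Advice",
    "Sensory Input-Based Question",
    "Questions Involving Human Emotions or Relationships",
    "Interpretation of Art or Literature",
    "Speculative or Theoretical Queries",
    "General Knowledge and Fact Verification",
]

def build_system_prompt():
    """Return a system prompt forcing a Yes/No answer or a category name."""
    bullets = "\n".join(f"- {c}" for c in BAD_QUESTION_CATEGORIES)
    return (
        "Answer the user's question with only 'Yes' or 'No'. If it cannot "
        "be answered that way, reply with the single matching bad-question "
        "category from this list:\n" + bullets
    )
```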
- Display output (sys I/O)
- Reading the Response Aloud
  - Uses a text-to-speech library.
  - Different libraries sound different; we need to try a few and decide which sounds best.
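Whichever library wins, it may help to normalize ChatGPT's answer before speaking it. A small preprocessing sketch, our own idea rather than a feature of any particular TTS library:

```python
import re

def clean_for_tts(text):
    """Strip markdown-ish characters and collapse whitespace before handing
    ChatGPT's answer to the TTS engine (our own preprocessing suggestion)."""
    text = re.sub(r"[*_`#>]", "", text)
    return re.sub(r"\s+", " ", text).strip()
```

The cleaned string would then go to, e.g., pyttsx3's `engine.say()`.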
```shell
git clone https://github.com/RadiumLZhang/Mouth-of-Truth.git
```
- main.py – The main application file that handles the user interaction flow.
- sensors.py – Sensor management, including the PIR motion sensor and microphone (will probably need adjustment later).
- chatgpt_interface.py – Manages interaction with the ChatGPT API.
- output_devices.py – Controls the TV display, speaker, and LED light.
- utils.py – Utility functions.
```shell
sudo apt-get install python3-rpi.gpio
```
https://www.amazon.com/dp/B07KBWVJMP/?coliid=I3B5R132ZHC8H3&colid=2SPLQP9IFVO3J&psc=1&ref_=list_c_wl_lv_cv_lig_dp_it
- Simulated Signal ✅: `python3 test_pir_sensor.py`
- Real Signal ✅: `python3 test_pir_sensor_physical.py`
HC-SR501 PIR Motion Sensor connections:

- VCC → Raspberry Pi 5V pin
- GND → Raspberry Pi GND pin
- OUT → Raspberry Pi GPIO 17
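A sketch of reading that OUT pin with simple debouncing; the read function is injected so the logic runs without hardware, and the three-consecutive-samples threshold is an assumption:

```python
def wait_for_motion(read_pin, samples=3, poll=lambda: None):
    """Return True once `samples` consecutive reads of the PIR OUT pin
    (GPIO 17 in the wiring above) are HIGH. `read_pin` is injected so the
    logic is testable without hardware; the debounce scheme is our own."""
    consecutive = 0
    while True:
        if read_pin():
            consecutive += 1
            if consecutive >= samples:
                return True
        else:
            consecutive = 0
        poll()  # placeholder for time.sleep() on real hardware
```

On the Pi, `read_pin` would be something like `lambda: GPIO.input(17)` with RPi.GPIO, and `poll` a short `time.sleep`.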
```shell
sudo apt-get install python3-pyaudio
pip3 install speechrecognition --break-system-packages
pip3 install pyttsx3 --break-system-packages  # for text-to-speech (optional)
```
- Voltage: DC 4-7 V
- Communication interface: single-wire
- LED chip: WS2812B
The rpi_ws281x library is designed to control WS2812B LEDs on the Raspberry Pi.

```shell
sudo pip3 install rpi_ws281x
```
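The breathing effect is essentially a slow brightness curve fed to the strip each tick. A sketch of the math; the 4-second period and raised-cosine shape are our guesses at a calm pace, not project constants:

```python
import math

def breathing_brightness(t, period=4.0, max_brightness=255):
    """Brightness (0..max_brightness) at time t seconds for a breathing
    cycle of `period` seconds: a raised cosine so the light eases in and
    out instead of ramping linearly."""
    phase = (t % period) / period  # 0..1 through the cycle
    return int((1 - math.cos(2 * math.pi * phase)) / 2 * max_brightness)
```

The returned value could be fed to something like rpi_ws281x's `PixelStrip.setBrightness()` in a loop.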
We hit a problem with the system-wide install, so we set up a virtual environment instead:
1. Update the Raspberry Pi

   ```shell
   sudo apt update
   sudo apt upgrade
   ```

2. Install system-wide dependencies

   ```shell
   sudo apt install python3-rpi.gpio python3-pyaudio python3-venv
   ```

   - python3-rpi.gpio: for controlling the GPIO pins.
   - python3-pyaudio: for handling audio input/output (used for voice recognition).
   - python3-venv: to create and manage virtual environments.

3. Set up a virtual environment for the project

   ```shell
   cd ~/Mouth-of-Truth
   python3 -m venv venv
   source venv/bin/activate
   ```

4. Install Python libraries for the project

   ```shell
   pip install rpi_ws281x
   pip install SpeechRecognition pyttsx3
   pip install openai
   ```

5. Configure GPIO and PWM permissions: GPIO/PWM access needs root, so run the venv's interpreter with sudo.

   ```shell
   sudo venv/bin/python your_script.py
   ```

6. Run the project (note: plain `sudo python3` would bypass the venv, so use the venv's interpreter)

   ```shell
   source venv/bin/activate
   sudo venv/bin/python main.py
   ```
- VCC (5V): connect to the 5V pin
- GND: connect to a GND pin
- DIN (data input): connect to a PWM-capable GPIO pin (GPIO 18)
- Tried GT Visitor, EduRoam, GT Other
https://gatech.service-now.com/home?id=kb_article_view&sysparm_article=KB0026531
https://gatech.service-now.com/home?id=kb_article_view&sysparm_article=KB0026877
- Reinstalled the entire Pi system, which broke the Python environment.
- GPIO permission issue: `chmod +x main.py` (a quick fix; we might need another solution later).
- Main user flow test
  - While motion is detected:
    - Stop the breathing light
    - Print the GPT request
- Microphone input: there is only one aux input, so it can drive either the speaker or the microphone; we need another device to support both at once.
- Speaker output: play a prerecorded WAV file.
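Given the prerecorded-WAV workaround, choosing the clip from ChatGPT's answer could look like this; the filenames and the fallback convention are placeholders, not actual project files:

```python
def pick_clip(answer):
    """Choose a prerecorded WAV for the spoken answer. The paths are
    hypothetical; anything that isn't a clear yes/no falls back to a
    bad-question clip (our own convention)."""
    word = answer.strip().lower().rstrip(".!")
    if word.startswith("yes"):
        return "audio/yes.wav"
    if word.startswith("no"):
        return "audio/no.wav"
    return "audio/bad_question.wav"
```

Playback itself would go through pydub/simpleaudio as installed above.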
- Merge the pull request
- Install the required Python audio packages

  ```shell
  pip install pydub simpleaudio
  ```

- Reinstall the virtual environment

  ```shell
  sudo apt update
  sudo apt install build-essential libssl-dev libffi-dev python3-dev
  pip install openai
  pip install rpi_ws281x --break-system-packages
  pip install SpeechRecognition pyttsx3
  pip3 install RPi.GPIO
  pip install pydub
  ```
- LED Light
  - VCC (5V) → Raspberry Pi 5V pin
  - GND → Raspberry Pi GND pin
  - DIN (data input) → a PWM-capable GPIO pin (GPIO 18)
- Motion Detect
  - VCC → Raspberry Pi 5V pin
  - GND → Raspberry Pi GND pin
  - OUT → Raspberry Pi GPIO 17
```shell
sudo apt install pipx
```
- Test with background noise

  ```shell
  sudo venv/bin/python main.py
  ```

- Install audio and speech dependencies

  ```shell
  sudo apt-get install portaudio19-dev
  sudo venv/bin/pip install pyaudio
  sudo apt-get install flac
  sudo apt-get install espeak-ng
  ```

- Set the OpenAI API key

  ```shell
  export OPENAI_API_KEY="your_openai_api_key_here"
  sudo venv/bin/pip install python-dotenv
  ```
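With python-dotenv installed, the key can live in a `.env` file instead of a shell export. A sketch; the fallback behavior when python-dotenv is absent is our assumption:

```python
import os

def get_api_key():
    """Read the OpenAI key from the environment, loading a .env file first
    via python-dotenv if it is available (the .env location defaults to
    the current directory)."""
    try:
        from dotenv import load_dotenv  # provided by the python-dotenv package
        load_dotenv()                   # reads OPENAI_API_KEY from ./.env
    except ImportError:
        pass                            # fall back to a plain exported variable
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key
```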
- Change the UI font size