diff --git a/docs/doc/assets/handposex_14class.jpg b/docs/doc/assets/handposex_14class.jpg
new file mode 100644
index 00000000..0ca5f145
Binary files /dev/null and b/docs/doc/assets/handposex_14class.jpg differ
diff --git a/docs/doc/en/sidebar.yaml b/docs/doc/en/sidebar.yaml
index d49f6260..af59b463 100644
--- a/docs/doc/en/sidebar.yaml
+++ b/docs/doc/en/sidebar.yaml
@@ -85,6 +85,8 @@ items:
label: Hand landmarks
- file: vision/body_pose_classification.md
label: Human pose classifier
+ - file: vision/hand_gesture_classification.md
+    label: Hand gesture classifier
- file: vision/maixhub_train.md
label: MaixHub online AI training
- file: vision/customize_model_yolov5.md
diff --git a/docs/doc/en/video/uvc_streaming.md b/docs/doc/en/video/uvc_streaming.md
index 201c6a4a..f5b06d89 100644
--- a/docs/doc/en/video/uvc_streaming.md
+++ b/docs/doc/en/video/uvc_streaming.md
@@ -68,7 +68,7 @@ uvcs.show(img)
This approach offers high performance with a single-process implementation, but USB functionality will only be available when the process is running. Therefore, when stopping this process, it's important to note that the enabled `Rndis` and `NCM` functionalities will temporarily become inactive, causing a network disconnection.
**Reference example source code path:**
-`MaixPy/examples/vision/streaming/uvc_stream.py`
+`MaixPy/examples/vision/streaming/uvc_server.py`
**Also packaged as an app source code path:**
`MaixCDK/projects/app_uvc_camera/main/src/main.cpp`
diff --git a/docs/doc/en/vision/hand_gesture_classification.md b/docs/doc/en/vision/hand_gesture_classification.md
new file mode 100644
index 00000000..500b64e1
--- /dev/null
+++ b/docs/doc/en/vision/hand_gesture_classification.md
@@ -0,0 +1,46 @@
+---
+title: MaixCAM MaixPy Hand Gesture Classification Based on Hand Keypoint Detection
+---
+
+## Introduction
+
+`MaixCAM MaixPy Hand Gesture Classification Based on Hand Keypoint Detection` builds on the output of the **Hand Keypoint Detection** model to classify various hand gestures.
+
+The current dataset is the `14-class static hand gesture dataset`, with a total of 2850 samples across the 14 categories.
+[Dataset Download Link (Baidu Netdisk, Password: 6urr)](https://pan.baidu.com/s/1Sd-Ad88Wzp0qjGH6Ngah0g)
+
+![](../../assets/handposex_14class.jpg)
+
+This app is implemented in `MaixPy/projects/app_hand_gesture_classifier/main.py`, and its main logic is as follows (a condensed sketch follows the list):
+
+1. Load the `14-class static hand gesture dataset` processed by the **Hand Keypoint Detection** model, extracting the `20` keypoint coordinate offsets relative to the wrist.
+2. Initially train on the first `4` classes to support basic gesture recognition.
+3. Use the **Hand Keypoint Detection** model to process the camera input and visualize classification results on the screen.
+4. Tap the top-right `class14` button to add more samples and retrain the model for full `14-class` gesture recognition.
+5. Tap the bottom-right `class4` button to remove the added samples and retrain the model back to the `4-class` mode.
+6. Tap the small area between the buttons to display the last training duration at the top of the screen.
+7. Tap the remaining large area to show the currently supported gesture classes on the left side: **green** for supported, **yellow** for unsupported.
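+
+Below is a condensed sketch of that logic, adapted from `main.py` (the UI drawing, left/right-hand mirroring, and the retraining buttons are omitted here):
+
+```python
+import numpy as np
+from maix import camera, display, nn
+from LinearSVC import LinearSVC, LinearSVCManager
+
+detector = nn.HandLandmarks(model="/root/models/hand_landmarks.mud")
+cam = camera.Camera(320, 224, detector.input_format())
+disp = display.Display()
+
+data = np.load("trainSets.npz")           # bundled 14-class keypoint samples
+mask = data["y"] < 4                      # start with the first 4 classes only
+clfm = LinearSVCManager(LinearSVC(C=1.0, learning_rate=0.01, max_iter=100),
+                        data["X"][mask], data["y"][mask])
+
+while True:
+    img = cam.read()
+    for obj in detector.detect(img, conf_th=0.7, iou_th=0.45, conf_th2=0.8, landmarks_rel=False):
+        pts = np.array(obj.points[8:8 + 21 * 3]).reshape((21, -1))[:, :2]
+        offsets = (pts[1:] - pts[0]) / (img.width(), img.height())  # 20 wrist-relative offsets
+        label, conf = clfm.test(np.array([offsets.flatten()]))
+        print(label[0], conf[0])          # predicted class index and confidence
+    disp.show(img)
+```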
+
+## Demo Video
+
+<video controls src="../../../static/video/hand_gesture_demo.mp4"></video>
+
+1. The video demonstrates the `14-class` mode after executing step `4`, recognizing gestures `1-10` (some are mapped to other meanings by default), **OK**, **thumbs up**, **finger heart** (requires the back of the hand, hard to demonstrate in the video but verifiable yourself), and **pinky stretch**, for a total of `14` gestures.
+
+2. Then, step `5` is executed, reverting to the `4-class` mode, where only gestures **1**, **5**, **10** (fist), and **OK** are recognizable; other gestures no longer produce correct results. During this process, step `7` is also executed, confirming the current `4-class` mode: only the first 4 gestures are shown in green, and the remaining 10 in yellow.
+
+3. Step `4` is executed again, restoring the `14-class` mode, and gestures that the `4-class` mode could not recognize are correctly identified once more.
+
+4. Finally, dual-hand recognition is demonstrated, and both hands' gestures are accurately recognized simultaneously.
+
+## Others
+
+The demo video was captured from the **MaixVision** screen preview window (top-right corner), which matches what is actually shown on the device screen.
+
+For detailed implementation, please refer to the source code and the above analysis.
+
+Further development or modification can be done directly on top of the source code, which is commented for guidance.
+
+If you need additional assistance, feel free to leave a message on **MaixHub** or send an email to the official company address.
diff --git a/docs/doc/zh/sidebar.yaml b/docs/doc/zh/sidebar.yaml
index 7ea7048e..f4c51475 100644
--- a/docs/doc/zh/sidebar.yaml
+++ b/docs/doc/zh/sidebar.yaml
@@ -85,6 +85,8 @@ items:
label: 手部关键点检测
- file: vision/body_pose_classification.md
label: 人体姿态分类器
+ - file: vision/hand_gesture_classification.md
+ label: 手势分类器
- file: vision/maixhub_train.md
label: MaixHub 在线训练 AI 模型
- file: vision/customize_model_yolov5.md
diff --git a/docs/doc/zh/video/uvc_streaming.md b/docs/doc/zh/video/uvc_streaming.md
index 405dc556..b86cb4d9 100644
--- a/docs/doc/zh/video/uvc_streaming.md
+++ b/docs/doc/zh/video/uvc_streaming.md
@@ -61,7 +61,7 @@ uvcs.show(img)
高性能单进程实现,但仅在运行时 USB 全部功能才可用,故停止该进程时需要注意仍启用的 `Rndis` 和 `NCM` 会暂时失效,断开网络链接。
-参考示例源码路径:`MaixPy/examples/vision/streaming/uvc_stream.py`
+参考示例源码路径:`MaixPy/examples/vision/streaming/uvc_server.py`
另有封装成 APP 的源码路径:`MaixCDK/projects/app_uvc_camera/main/src/main.cpp`
diff --git a/docs/doc/zh/vision/hand_gesture_classification.md b/docs/doc/zh/vision/hand_gesture_classification.md
new file mode 100644
index 00000000..9447adee
--- /dev/null
+++ b/docs/doc/zh/vision/hand_gesture_classification.md
@@ -0,0 +1,50 @@
+---
+title: MaixCAM MaixPy 基于手部关键点检测结果进行手势分类
+---
+
+
+## 简介
+
+`MaixCAM MaixPy` 可基于手部关键点检测的结果对多种手势进行分类。
+
+目前使用的数据集为`14 类静态手势数据集`,[数据集下载地址(百度网盘 Password: 6urr )](https://pan.baidu.com/s/1Sd-Ad88Wzp0qjGH6Ngah0g),数据集共 2850 个样本,分为 14 类。
+
+
+![](../../assets/handposex_14class.jpg)
+
+
+该 app 实现位于 `MaixPy/projects/app_hand_gesture_classifier/main.py`,主要逻辑如下(列表后附有精简示例):
+
+1. 加载 `14 类静态手势数据集` 经 `手部关键点检测` 处理后的 `20` 个相对手腕的坐标偏移
+2. 初始训练前 `4` 个分类,以支持手势识别
+3. 加载 `手部关键点检测` 模型处理摄像头画面,并通过该分类器将结果可视化在屏幕上
+4. 点击右上角 `class14` 可增添剩余分类样本再训练以达到 `14` 分类手势
+5. 点击右下角 `class4` 可移除上一步添加的分类样本再训练以达到 `4` 分类手势
+6. 点击按钮之间的小块区域,可在顶部显示分类器上一次训练的时长
+7. 点击其余大块区域,可在左侧显示当前支持的分类类别,绿色表示支持,黄色表示不支持
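+
+下面是改写自 `main.py` 的特征预处理示意(细节有省略),对应第 1 步中 `20` 个相对手腕坐标偏移的计算:
+
+```python
+import numpy as np
+
+def preprocess(hand_landmarks, is_left=False, boundary=(1, 1)):
+    # 21 个关键点 reshape 后取 (x, y),以第 0 点(手腕)为原点
+    pts = np.array(hand_landmarks).reshape((21, -1))[:, :2]
+    vector = (pts[1:] - pts[0]).astype("float64") / boundary  # 20 个归一化偏移
+    if not is_left:
+        vector[:, 0] *= -1  # 镜像 x 轴,左右手共用同一特征空间
+    return vector.flatten()  # 展平成 40 维特征向量
+```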
+
+
+
+## 效果视频
+
+<video controls src="../../../static/video/hand_gesture_demo.mp4"></video>
+
+1. 视频内容为执行了上述第 `4` 步后的 `14` 分类模式,可识别手势 `1-10`(默认对应其他英文释义)、ok、大拇指点赞、比心(需要手背,拍摄时不好演示,可自行验证)、小拇指伸展,一共 `14` 种手势。
+
+2. 紧接着执行第 `5` 步,回退到 `4` 分类模式,仅可识别 1、5、10(握拳)和 ok,其余手势都无法识别出正常结果。期间也执行了第 `7` 步,展示当前为 `4` 分类模式:除前 4 种手势为绿色外,后 10 种全部显示为黄色。
+
+3. 随后再次执行第 `4` 步,恢复到 `14` 分类模式,`4` 分类模式下无法识别的手势现在也恢复正确识别了。
+
+4. 末尾展示了双手的识别,实测可同时正确识别两只手的手势。
+
+
+## 其它
+
+效果视频捕获自 MaixVision 右上角的屏幕预览窗口,与设备屏幕实际显示内容一致。
+
+详细实现可参见源码和上述分析。
+
+二次开发或修改也可直接基于源码完成,内附有注释。
+
+如确实仍需协助,可在 MaixHub 上发帖留言,或发邮件到公司官方邮箱。
\ No newline at end of file
diff --git a/docs/static/video/hand_gesture_demo.mp4 b/docs/static/video/hand_gesture_demo.mp4
new file mode 100644
index 00000000..7004c168
Binary files /dev/null and b/docs/static/video/hand_gesture_demo.mp4 differ
diff --git a/projects/app_hand_gesture_classifier/.gitignore b/projects/app_hand_gesture_classifier/.gitignore
new file mode 100644
index 00000000..babf76a7
--- /dev/null
+++ b/projects/app_hand_gesture_classifier/.gitignore
@@ -0,0 +1,5 @@
+
+build
+dist
+/CMakeLists.txt
+
diff --git a/projects/app_hand_gesture_classifier/LinearSVC.py b/projects/app_hand_gesture_classifier/LinearSVC.py
new file mode 100644
index 00000000..7bbc7c9c
--- /dev/null
+++ b/projects/app_hand_gesture_classifier/LinearSVC.py
@@ -0,0 +1,184 @@
+import numpy as np
+
+class LinearSVC:
+ class StandardScaler:
+ mean:np.ndarray
+ std:np.ndarray
+ def transform(self, X):
+ return (X - self.mean) / self.std
+
+ def fit_transform(self, X):
+ self.mean = np.mean(X, axis=0)
+ self.std = np.std(X, axis=0)
+ return self.transform(X)
+
+ def __init__(self, C=1.0, learning_rate=0.01, max_iter=1000):
+ self.C = C
+ self.learning_rate = learning_rate
+ self.max_iter = max_iter
+ self.scaler = self.StandardScaler()
+
+ def save(self, filename: str):
+ np.savez(filename,
+ C = self.C,
+ learning_rate = self.learning_rate,
+ max_iter = self.max_iter,
+ scaler_mean = self.scaler.mean,
+ scaler_std = self.scaler.std,
+ classes = self.classes,
+ _W = self._W,
+ _B = self._B,
+ )
+
+ @classmethod
+ def load(cls, filename: str):
+ npzfile = np.load(filename)
+ self = cls(
+ C=float(npzfile["C"]),
+ learning_rate=float(npzfile["learning_rate"]),
+            max_iter=int(npzfile["max_iter"])  # iteration count: must be an int, it is used as a range() bound
+ )
+ self.scaler.mean = npzfile["scaler_mean"]
+ self.scaler.std = npzfile["scaler_std"]
+ self.classes = npzfile["classes"]
+ self._W = npzfile["_W"]
+ self._B = npzfile["_B"]
+ return self
+
+    def _train_binary_svm(self, X, y):
+        """
+        Train one binary (one-vs-rest) SVM by sub-gradient descent on the hinge loss.
+        """
+        n_samples, n_features = X.shape
+        w = np.zeros(n_features)
+        b = 0
+        for _ in range(self.max_iter):
+            scores = np.dot(X, w) + b  # prediction scores for all samples
+            margin = y * scores  # (n_samples,) functional margin of each sample
+            mask = margin < 1  # samples that violate the margin (treated as support vectors)
+            X_support = X[mask]  # margin-violating samples
+            y_support = y[mask]  # their labels
+            if len(X_support) > 0:  # vectorized sub-gradient update
+                w -= self.learning_rate * (2 * w / n_samples - self.C * np.dot(X_support.T, y_support))  # batch update of w
+                b -= self.learning_rate * (-self.C * np.sum(y_support))  # batch update of b
+        return w, b
+
+ def fit(self, X, y):
+        """
+        Train the multi-class SVM (one-vs-rest).
+        Parameters:
+        - X: feature matrix of shape (n_samples, n_features)
+        - y: label array of shape (n_samples,), containing multiple classes
+        """
+        self.classes = np.unique(y)  # all distinct classes
+        self._W = np.zeros((len(self.classes), X.shape[1]))
+        self._B = np.zeros(len(self.classes))
+        for i, cls in enumerate(self.classes):
+            binary_y = np.where(y == cls, 1, -1)  # build one-vs-rest labels
+ w, b = self._train_binary_svm(X, binary_y)
+ self._W[i] = w
+ self._B[i] = b
+
+ def forward(self, X):
+ return np.dot(X, self._W.T) + self._B
+
+ def predict(self, X):
+        return self.classes[np.argmax(self.forward(X), axis=1)]  # return the highest-scoring class
+
+ def predict_with_confidence(self, X):
+ def softmax(x):
+            x_max = np.max(x, axis=-1, keepdims=True)  # numerical stability: subtract the max
+ exp_x = np.exp(x - x_max)
+ return exp_x / np.sum(exp_x, axis=-1, keepdims=True)
+        res = self.forward(X)  # (n_samples, n_classes)
+        confidences = softmax(res)  # (n_samples, n_classes)
+        return self.classes[np.argmax(res, axis=1)], np.max(confidences, axis=1)  # highest-scoring class and its softmax confidence
+
+
+class LinearSVCManager:
+    def __init__(self, clf: LinearSVC = None, X=None, Y=None, pretrained=False):
+        if X is None:
+            X = np.empty((0, 0))
+        if Y is None:
+            Y = np.empty((0,))
+
+        # convert lists to NumPy arrays
+        if isinstance(X, list):
+            X = np.array(X)
+        if isinstance(Y, list):
+            Y = np.array(Y)
+
+        # type checks
+        if not isinstance(X, np.ndarray):
+            raise TypeError("X must be a list or numpy array.")
+        if not isinstance(Y, np.ndarray):
+            raise TypeError("Y must be a list or numpy array.")
+
+        if len(X) != len(Y):
+            raise ValueError("Length of X and Y must be equal.")
+        if len(Y) == 0:
+            raise ValueError("A classifier (clf) must be provided with training samples X and Y.")
+
+        # default to None instead of a shared mutable LinearSVC() instance
+        if clf is None:
+            if pretrained:
+                raise ValueError("A pretrained classifier (clf) can't be `None`.")
+            clf = LinearSVC()
+
+ self.clf = clf
+ self.samples = (X, Y)
+
+ if not pretrained:
+ self.train()
+
+ def train(self):
+ X_scaled = self.clf.scaler.fit_transform(self.samples[0])
+ self.clf.fit(X_scaled, self.samples[1])
+ print(f"{len(self.samples[1])} samples have been trained.")
+
+ def test(self, X):
+ X = np.array(X)
+ if X.shape[-1] != self.samples[0].shape[1]:
+ raise ValueError("Tested data dimension mismatch.")
+ X_scaled = self.clf.scaler.transform(X)
+ return self.clf.predict_with_confidence(X_scaled)
+
+ def add(self, X, Y):
+ X = np.array(X)
+ Y = np.array(Y)
+
+ if X.shape[-1] != self.samples[0].shape[1]:
+ raise ValueError("Added data dimension mismatch.")
+
+ if len(self.samples[0])>0:
+ self.samples = (
+ np.vstack([self.samples[0], X]),
+ np.concatenate([self.samples[1], Y])
+ )
+ else:
+ self.samples = (X, Y)
+
+ self.train()
+
+ def rm(self, indices):
+ X, Y = self.samples
+
+ if any(idx < 0 or idx >= len(Y) for idx in indices):
+ raise IndexError("Index out of bounds.")
+
+ mask = np.ones(len(Y), dtype=bool)
+ mask[indices] = False
+
+ self.samples = (X[mask], Y[mask])
+
+ if len(self.samples[1]) > 0:
+ self.train()
+ else:
+ print("Warning: All data has been removed. Model is untrained now.")
+
+ def clear_samples(self):
+ self.samples = (np.empty((0, self.samples[0].shape[1])), np.empty((0,)))
+ print("All training samples have been cleared.")
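+
+# A minimal self-check sketch (not part of the app): train on a tiny toy set,
+# query predictions, and round-trip save/load. Run with `python LinearSVC.py`.
+if __name__ == "__main__":
+    X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
+    y = np.array([0, 0, 1, 1])
+    mgr = LinearSVCManager(LinearSVC(C=1.0, learning_rate=0.01, max_iter=100), X, y)
+    labels, confs = mgr.test(X)
+    print(f"predicted={labels}, confidence={confs}")
+    mgr.clf.save("clf_demo.npz")            # serialize weights and scaler
+    clf2 = LinearSVC.load("clf_demo.npz")   # reload into a fresh instance
+    print(clf2.predict(clf2.scaler.transform(X)))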
diff --git a/projects/app_hand_gesture_classifier/README.md b/projects/app_hand_gesture_classifier/README.md
new file mode 100644
index 00000000..5f26edba
--- /dev/null
+++ b/projects/app_hand_gesture_classifier/README.md
@@ -0,0 +1,15 @@
+The touchscreen is segmented into four sections:
+
+1. The first two are circles located in the upper-right and lower-right corners.
+
+2. The third section is the small area between these two circles.
+
+3. The fourth section is the largest, covering the entire left side.
+
+When pressed, each section responds as follows:
+
+1. The circles show a hint that releasing, without first moving away, will activate them.
+
+2. The area between the circles shows how long the last training session took.
+
+3. The left area shows which gesture classes are currently active.
\ No newline at end of file
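+
+A condensed sketch of the hit-testing behind these sections, adapted from `main.py` (coordinates assume the 320x224 camera image; the actual handlers are omitted):
+
+```python
+x, y, pressed = ts.read()                # touchscreen state
+if x >= 300 - 30:                        # right-hand strip containing both circles
+    if y <= 20 + 30:                     # upper-right circle: switch to 14 classes
+        pass
+    elif y >= 224 - 1 - 20 - 30:         # lower-right circle: switch back to 4 classes
+        pass
+    elif pressed:                        # small area between the circles
+        pass                             # last training duration is drawn on screen
+elif pressed:                            # large left area
+    pass                                 # the list of active classes is drawn on screen
+```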
diff --git a/projects/app_hand_gesture_classifier/app.yaml b/projects/app_hand_gesture_classifier/app.yaml
new file mode 100644
index 00000000..02e21c64
--- /dev/null
+++ b/projects/app_hand_gesture_classifier/app.yaml
@@ -0,0 +1,14 @@
+id: gesture_classifier
+name: Gesture Classifier
+name[zh]: 手势分类
+version: 1.0.0
+author: Taorye@Sipeed
+icon: icon.png
+desc: Classify the hand gesture.
+files:
+ - app.yaml
+ - icon.png
+ - main.py
+ - LinearSVC.py
+ - clf_dump.npz
+ - trainSets.npz
diff --git a/projects/app_hand_gesture_classifier/clf_dump.npz b/projects/app_hand_gesture_classifier/clf_dump.npz
new file mode 100644
index 00000000..ee14b0fb
Binary files /dev/null and b/projects/app_hand_gesture_classifier/clf_dump.npz differ
diff --git a/projects/app_hand_gesture_classifier/icon.png b/projects/app_hand_gesture_classifier/icon.png
new file mode 100644
index 00000000..97bdf86b
Binary files /dev/null and b/projects/app_hand_gesture_classifier/icon.png differ
diff --git a/projects/app_hand_gesture_classifier/main.py b/projects/app_hand_gesture_classifier/main.py
new file mode 100644
index 00000000..414c5439
--- /dev/null
+++ b/projects/app_hand_gesture_classifier/main.py
@@ -0,0 +1,169 @@
+from maix import camera, display, image, nn, app, time, touchscreen
+
+detector = nn.HandLandmarks(model="/root/models/hand_landmarks.mud")
+# detector = nn.HandLandmarks(model="/root/models/hand_landmarks_bf16.mud")
+landmarks_rel = False
+
+cam = camera.Camera(320, 224, detector.input_format())
+disp = display.Display()
+
+ts = touchscreen.TouchScreen()
+
+# Loading screen
+img = cam.read()
+img.draw_string(100, 112, "Loading...\nwait up to 10s", color = image.COLOR_GREEN)
+disp.show(img)
+
+
+# Train LinearSVC
+import numpy as np
+
+from LinearSVC import LinearSVC, LinearSVCManager
+
+from contextlib import contextmanager
+@contextmanager
+def timer(name):
+    import time  # stdlib time module (module-level `time` is maix.time)
+    result = {'passed': 0.}  # dict so the elapsed time survives the context exit
+ start = time.time()
+ yield result
+ end = time.time()
+ passed = end - start
+ result['passed'] = passed
+    print(f"{name} took {passed:.6f} s")
+
+name_classes = ("one", "five", "fist", "ok", "heartSingle", "yearh", "three", "four", "six", "Iloveyou", "gun", "thumbUp", "nine", "pink")
+last_train_time = 0
+
+npzfile = np.load("trainSets.npz")
+X_train = npzfile["X"]
+y_train = npzfile["y"]
+assert len(X_train) == len(y_train)
+print(f"X_train_len: {len(X_train)}")
+
+# print(f"X_train: {X_train[0]}")
+# print(f"y_train: {y_train[0]}")
+
+if 0:  # flip to 1 to load the pretrained classifier shipped with the app
+    with timer("load") as r:
+        clfm = LinearSVCManager(LinearSVC.load("clf_dump.npz"), X_train, y_train, pretrained=True)
+    last_train_time = r['passed']
+else:
+    # build masks over the label array
+    mask_lt_4 = y_train < 4   # labels of the first 4 classes
+    mask_ge_4 = y_train >= 4  # labels of the remaining 10 classes
+    with timer("train first 4 classes") as r:
+        clfm = LinearSVCManager(LinearSVC(C=1.0, learning_rate=0.01, max_iter=100), X_train[mask_lt_4], y_train[mask_lt_4])
+    last_train_time = r['passed']
+    # with timer("train remaining classes") as r:
+    #     clfm.add(X_train[mask_ge_4], y_train[mask_ge_4])
+    #     last_train_time = r['passed']
+
+# print(f"_W: {clfm.clf._W[0]}")
+# print(f"_b: {clfm.clf._B}")
+
+with timer("recall on training set"):
+ labels, confs = clfm.test(clfm.samples[0])
+ recall_count = len(clfm.samples[1])
+ right_count = np.sum(labels == clfm.samples[1])
+ print(f"right/all= {right_count}/{recall_count}, acc: {right_count/recall_count}")
+# clfm.clf.save("maix_clf_dump.npz")
+
+# clf1 = LinearSVC.load("clf_dump.npz")
+# print(f"{clfm.clf._W-clf1._W}, {clfm.clf._B-clf1._B}")
+
+def preprocess(hand_landmarks, is_left=False, boundary=(1,1,1)):
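+    # Reshape the flat landmark list into 21 (x, y[, z]) points, drop the wrist
+    # (point 0) and keep the 20 remaining keypoints as offsets relative to it,
+    # normalized by the image size; the x axis is mirrored for one hand so that
+    # left and right hands share a single feature space.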
+ hand_landmarks = np.array(hand_landmarks).reshape((21, -1))
+ vector = hand_landmarks[:,:2]
+ vector = vector[1:] - vector[0]
+ vector = vector.astype('float64') / boundary[:vector.shape[1]]
+ if not is_left: # mirror
+ vector[:,0] *= -1
+ return vector
+
+
+
+
+# main loop
+class_nums_changing = False
+while not app.need_exit():
+ img = cam.read()
+ objs = detector.detect(img, conf_th = 0.7, iou_th = 0.45, conf_th2 = 0.8, landmarks_rel = landmarks_rel)
+ for obj in objs:
+ time_start = time.time_us()
+ hand_landmarks = preprocess(obj.points[8:8+21*3], obj.class_id == 0, (img.width(), img.height(), 1))
+ features = np.array([hand_landmarks.flatten()])
+        class_idx, pred_conf = clfm.test(features)  # predicted class and confidence
+ time_predict = time.time_us()
+ class_idx, pred_conf = class_idx[0], pred_conf[0]
+
+ # img.draw_rect(obj.x, obj.y, obj.w, obj.h, color = image.COLOR_RED)
+ msg = f'{detector.labels[obj.class_id]}: {obj.score:.2f}\n{name_classes[class_idx]}({class_idx})={pred_conf*100:.2f}%\n{time_predict-time_start}us'
+ img.draw_string(obj.points[0], obj.points[1], msg, color = image.COLOR_RED if obj.class_id == 0 else image.COLOR_GREEN, scale = 1.4, thickness = 2)
+ detector.draw_hand(img, obj.class_id, obj.points, 4, 10, box=True)
+ if landmarks_rel:
+ img.draw_rect(0, 0, detector.input_width(detect=False), detector.input_height(detect=False), color = image.COLOR_YELLOW)
+ for i in range(21):
+ x = obj.points[8 + 21*3 + i * 2]
+ y = obj.points[8 + 21*3 + i * 2 + 1]
+ img.draw_circle(x, y, 3, color = image.COLOR_YELLOW)
+
+
+ current_n_classes = len(clfm.clf.classes)
+
+ get_color = lambda n: image.COLOR_GREEN if current_n_classes == n else image.COLOR_RED
+ img.draw_circle(300, 20, 30, color = get_color(14))
+ img.draw_string(300-22, 20-18, "class 14", color = get_color(14))
+ img.draw_circle(300, 224-1-20, 30, color = get_color(4))
+ img.draw_string(300-22, 224-1-20-18, "class 4", color = get_color(4))
+    x, y, pressed = ts.read()
+ x = int(x / disp.width() * img.width())
+ y = int(y / disp.height() * img.height())
+    if x >= 300-30:
+        # if pressed:
+        #     print(f"x, y = {x}, {y}, {pressed}")
+        if y <= 20+30:
+            if pressed:
+ if not class_nums_changing and current_n_classes == 4:
+ class_nums_changing = True
+ if class_nums_changing:
+ img.draw_string(30, 112, "Release to upgrade to class 14\n and please wait.", color = image.COLOR_RED)
+ else:
+ if class_nums_changing:
+ class_nums_changing = False
+                    with timer("train remaining classes") as r:
+                        mask_ge_4 = y_train >= 4  # samples of the 10 extra classes
+                        clfm.add(X_train[mask_ge_4], y_train[mask_ge_4])
+ last_train_time = r['passed']
+ print("success changed to 14")
+ elif y >= 224-1-20-30:
+            if pressed:
+ if not class_nums_changing and current_n_classes == 14:
+ class_nums_changing = True
+ if class_nums_changing:
+ img.draw_string(30, 112, "Release to retrain to class 4\n and please wait.", color = image.COLOR_RED)
+ else:
+ if class_nums_changing:
+ class_nums_changing = False
+                    with timer("remove extra classes") as r:
+                        mask_ge_4 = clfm.samples[1] >= 4  # current samples with label >= 4
+                        indices_ge_4 = np.where(mask_ge_4)[0]
+ clfm.rm(indices_ge_4)
+ # clfm = LinearSVCManager(LinearSVC(C=1.0, learning_rate=0.01, max_iter=100), X_train[mask_lt_4], y_train[mask_lt_4])
+ last_train_time = r['passed']
+ print("success changed to 4")
+        elif pressed:
+ img.draw_string(30, 112, "Press Red circle to make it\n Green(active).", color = image.COLOR_RED)
+
+ img.draw_string(0, 0, f'last_train_time= {last_train_time:.6f}s', color = image.COLOR_GREEN)
+    elif pressed:
+ img.draw_string(30, 112, "Press Red circle to make it\n Green(active).", color = image.COLOR_RED)
+
+ img.draw_string(0, 0, ','.join(name_classes[:4]), color = image.COLOR_GREEN)
+ img.draw_string(0, 20, '\n'.join(name_classes[4:]), color = image.COLOR_YELLOW if current_n_classes == 4 else image.COLOR_GREEN)
+ disp.show(img)
diff --git a/projects/app_hand_gesture_classifier/trainSets.npz b/projects/app_hand_gesture_classifier/trainSets.npz
new file mode 100644
index 00000000..4d2a30b2
Binary files /dev/null and b/projects/app_hand_gesture_classifier/trainSets.npz differ