Archive: 359 posts found.
2023-06-05
How to Remove Write Protection from a Disk in Windows
1. Steps

1) Press Win+X and choose Windows PowerShell (Admin) to open an elevated prompt, then run the `diskpart` command:

```
Windows PowerShell
版权所有 (C) Microsoft Corporation。保留所有权利。

尝试新的跨平台 PowerShell https://aka.ms/pscore6

PS C:\WINDOWS\system32> diskpart

Microsoft DiskPart 版本 10.0.19041.964
Copyright (C) Microsoft Corporation.
在计算机上: DESKTOP-80KQHVC

DISKPART>
```

2) At the diskpart prompt, run `list disk` to list every disk in the system and note the disk number (e.g. 磁盘 0):

```
DISKPART> list disk

  磁盘 ###  状态           大小     可用     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  磁盘 0    联机            238 GB  1024 KB        *
```

3) Attach diskpart to the disk you want to work on with `select`. There is only one disk here, so we use disk 0:

```
DISKPART> select disk 0

磁盘 0 现在是所选磁盘。
```

4) If you are not sure which disk maps to which number, or want to confirm a disk's state, run `attributes disk` to inspect the selected disk. The "只读" (read-only) attribute is the write protection: "是" means the disk is write-protected, "否" means it is not.

5) If write protection is on, clear it with the following command (attributes = the attribute operation, disk = the target, clear = remove, readonly = the read-only flag, i.e. the write protection):

```
DISKPART> attributes disk clear readonly
```

6) When that finishes, check the attributes again: the read-only flag is now 否, and files can be written to the disk normally.

```
DISKPART> attributes disk
当前只读状态: 否
只读: 否
启动磁盘: 是
页面文件磁盘: 是
休眠文件磁盘: 否
故障转储磁盘: 是
群集磁盘: 否
```
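If this has to be done repeatedly (say, for a batch of USB drives), diskpart can also run non-interactively from a script file via `diskpart /s`. Below is a minimal Python sketch of that idea — the disk number is an assumption you must adjust, and it only works from an elevated (administrator) shell:

```python
import os
import subprocess
import tempfile

# Same commands as the interactive session above; "disk 0" is an assumption.
DISKPART_SCRIPT = "select disk 0\nattributes disk clear readonly\nattributes disk\n"

def clear_readonly():
    # `diskpart /s <file>` executes the commands in the file non-interactively.
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(DISKPART_SCRIPT)
        script_path = f.name
    try:
        result = subprocess.run(["diskpart", "/s", script_path],
                                capture_output=True, text=True, check=True)
        print(result.stdout)
    finally:
        os.remove(script_path)

if __name__ == "__main__":
    clear_readonly()  # run from an elevated prompt, or diskpart will refuse
```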
June 5, 2023
360 views
0 comments
0 likes
2023-05-28
Benchmarking YOLOv5 v6.0 Inference Speed
1.树莓派4Byolov5s(base) pi@raspberrypi:/data/yolov5-6.0 $ python detect.py --source test.mp4 --weight yolov5s.pt /home/pi/miniconda3/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: warn(f"Failed to load image Python extension: {e}") detect: weights=['yolov5s.pt'], source=test.mp4, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False YOLOv5 2021-10-12 torch 1.12.0 CPU Fusing layers... Model Summary: 213 layers, 7225885 parameters, 0 gradients /home/pi/miniconda3/lib/python3.7/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /root/pytorch/aten/src/ATen/native/TensorShape.cpp:2894.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] video 1/1 (1/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.791s) video 1/1 (2/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.741s) video 1/1 (3/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.731s) video 1/1 (4/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.731s) video 1/1 (5/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.731s) video 1/1 (6/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.725s) video 1/1 (7/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.716s) video 1/1 (8/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.732s) video 1/1 (9/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.731s) video 1/1 (10/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.752s)yolov5n(base) pi@raspberrypi:/data/yolov5-6.0 $ python detect.py --source test.mp4 --weight yolov5n.pt /home/pi/miniconda3/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: warn(f"Failed to load image Python extension: {e}") detect: weights=['yolov5n.pt'], source=test.mp4, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False YOLOv5 2021-10-12 torch 1.12.0 CPU Fusing layers... Model Summary: 213 layers, 1867405 parameters, 0 gradients /home/pi/miniconda3/lib/python3.7/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /root/pytorch/aten/src/ATen/native/TensorShape.cpp:2894.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] video 1/1 (1/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.390s) video 1/1 (2/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.379s) video 1/1 (3/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. 
(0.358s) video 1/1 (4/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.367s) video 1/1 (5/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.353s) video 1/1 (6/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.358s) video 1/1 (7/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.352s) video 1/1 (8/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.353s) video 1/1 (9/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.361s) video 1/1 (10/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.352s)2.Jetson AGX Xavieryolov5s(base-jupiter) nvidia@xavier:/data/yolov5-6.0$ python detect.py --source test.mp4 --weight yolov5s.pt detect: weights=['yolov5s.pt'], source=test.mp4, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False YOLOv5 2021-10-12 torch 1.10.0 CUDA:0 (Xavier, 31920.45703125MB) Fusing layers... /home/nvidia/archiconda3/envs/base-jupiter/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] Model Summary: 213 layers, 7225885 parameters, 0 gradients, 16.5 GFLOPs video 1/1 (1/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.071s) video 1/1 (2/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (3/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (4/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.063s) video 1/1 (5/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (6/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.063s) video 1/1 (7/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (8/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (9/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.063s) video 1/1 (10/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.063s) video 1/1 (11/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (12/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.063s) video 1/1 (13/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.064s) video 1/1 (14/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (15/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.064s) video 1/1 (16/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.063s) video 1/1 (17/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.064s) video 1/1 (18/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. 
(0.063s) video 1/1 (19/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.064s) video 1/1 (20/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 1 truck, Done. (0.063s)yolov5n(base-jupiter) nvidia@xavier:/data/yolov5-6.0$ python detect.py --source test.mp4 --weight yolov5n.pt detect: weights=['yolov5n.pt'], source=test.mp4, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False YOLOv5 2021-10-12 torch 1.10.0 CUDA:0 (Xavier, 31920.45703125MB) Fusing layers... /home/nvidia/archiconda3/envs/base-jupiter/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] Model Summary: 213 layers, 1867405 parameters, 0 gradients, 4.5 GFLOPs video 1/1 (1/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.039s) video 1/1 (2/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.030s) video 1/1 (3/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.030s) video 1/1 (4/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.030s) video 1/1 (5/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.029s) video 1/1 (6/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.030s) video 1/1 (7/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.029s) video 1/1 (8/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.030s) video 1/1 (9/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.030s) video 1/1 (10/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.030s) video 1/1 (11/985) /data/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.029s) video 1/1 (12/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.030s) video 1/1 (13/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.030s) video 1/1 (14/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.029s) video 1/1 (15/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.030s) video 1/1 (16/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.030s) video 1/1 (17/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.030s) video 1/1 (18/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.029s) video 1/1 (19/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.030s) video 1/1 (20/985) /data/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. 
(0.030s) 3.Jetson Xavier NXyolov5s(base-jupiter) nvidia@nx:/data_jupiter/yolov5-6.0$ python detect.py --source test.mp4 --weight yolov5s.pt detect: weights=['yolov5s.pt'], source=test.mp4, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False YOLOv5 2023-5-28 torch 1.10.0 CUDA:0 (Xavier, 7765.4140625MB) Fusing layers... /home/nvidia/archiconda3/envs/base-jupiter/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] Model Summary: 213 layers, 7225885 parameters, 0 gradients, 16.5 GFLOPs video 1/1 (1/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.061s) video 1/1 (2/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.043s) video 1/1 (3/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (4/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (5/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (6/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (7/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (8/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (9/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (10/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (11/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (12/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (13/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.040s) video 1/1 (14/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (15/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (16/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 3 trucks, Done. (0.040s) video 1/1 (17/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.040s) video 1/1 (18/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.040s) video 1/1 (19/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 2 trucks, Done. (0.040s) video 1/1 (20/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 2 airplanes, 1 truck, Done. 
(0.040s)yolov5n(base-jupiter) nvidia@nx:/data_jupiter/yolov5-6.0$ python detect.py --source test.mp4 --weight yolov5n.pt detect: weights=['yolov5n.pt'], source=test.mp4, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False YOLOv5 2023-5-28 torch 1.10.0 CUDA:0 (Xavier, 7765.4140625MB) Fusing layers... /home/nvidia/archiconda3/envs/base-jupiter/lib/python3.6/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /media/nvidia/NVME/pytorch/pytorch-v1.10.0/aten/src/ATen/native/TensorShape.cpp:2157.) return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined] Model Summary: 213 layers, 1867405 parameters, 0 gradients, 4.5 GFLOPs video 1/1 (1/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.049s) video 1/1 (2/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.032s) video 1/1 (3/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.032s) video 1/1 (4/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.033s) video 1/1 (5/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.032s) video 1/1 (6/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.032s) video 1/1 (7/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.032s) video 1/1 (8/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.032s) video 1/1 (9/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.032s) video 1/1 (10/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.032s) video 1/1 (11/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 person, 1 car, 1 truck, Done. (0.032s) video 1/1 (12/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 truck, Done. (0.032s) video 1/1 (13/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.032s) video 1/1 (14/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.033s) video 1/1 (15/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.032s) video 1/1 (16/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.033s) video 1/1 (17/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.033s) video 1/1 (18/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.032s) video 1/1 (19/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. (0.032s) video 1/1 (20/985) /data_jupiter/yolov5-6.0/test.mp4: 384x640 1 car, 1 bus, 1 truck, Done. 
(0.032s)

4. Jetson Nano

yolov5s: #TODO

yolov5n: #TODO

Summary

| Device | YOLOv5s speed | YOLOv5n speed | CPU | GPU | Memory | Approx. price (CNY) |
| --- | --- | --- | --- | --- | --- | --- |
| Raspberry Pi 4B | 731 ms / 1.37 FPS | 352 ms / 2.84 FPS | 4-core ARM A72 @ 1.5 GHz | none | none | ~850 |
| Jetson Nano | 161 ms / 6.21 FPS | 89 ms / 11.24 FPS | 4-core ARM A57 @ 1.43 GHz | 128-core Maxwell | 4 GB 64-bit LPDDR4x, 25.6 GB/s | ~1,300 |
| Jetson Xavier NX | 40 ms / 25 FPS | 32 ms / 31.25 FPS | 6-core NVIDIA Carmel ARMv8.2 64-bit, 6 MB L2 + 4 MB L3 | 48 Tensor Cores + 384 NVIDIA CUDA cores (Volta) | 8 GB 128-bit LPDDR4x, 59.7 GB/s | ~4,500 |
| Jetson AGX Xavier | 64 ms / 15.63 FPS | 29 ms / 34.48 FPS | 8-core NVIDIA Carmel Armv8.2 64-bit, 8 MB L2 + 4 MB L3 | 64 Tensor Cores + 512 NVIDIA CUDA cores (Volta) | 32 GB 256-bit LPDDR4x, 136.5 GB/s | ~10,000 |

References
- NVIDIA Jetson 嵌入式系统开发者套件和模组
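The ms/FPS numbers in the table are simple averages of the per-frame "Done. (0.xxxs)" timings that detect.py prints above. A small helper sketch for computing them from a saved log — the regex matches the log format shown above, everything else (paths, warm-up handling) is an assumption:

```python
import re
import sys

# matches the per-frame suffix printed by detect.py, e.g. "Done. (0.731s)"
TIME_RE = re.compile(r"Done\. \((\d+\.\d+)s\)")

def mean_speed(log_path, skip=1):
    """Return (average ms per frame, FPS), skipping warm-up frames."""
    with open(log_path) as f:
        times = [float(m.group(1)) for m in TIME_RE.finditer(f.read())]
    times = times[skip:]  # the first frame usually includes model warm-up
    avg = sum(times) / len(times)
    return avg * 1000, 1.0 / avg

if __name__ == "__main__":
    ms, fps = mean_speed(sys.argv[1])
    print(f"{ms:.0f} ms / {fps:.2f} FPS")
```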
May 28, 2023
242 views
0 comments
1 like
2023-05-17
Compressing Images with Python-OpenCV by Capping Width and Height
1. Background

Pre-processing before images are stored: fit them for display on phone screens and cut their storage footprint as far as possible, by capping each image's maximum width and height.

2. Core code

```python
import cv2
import os
import shutil

def img_compress_by_openCV(img_path_local, img_size_thresh=300, max_height=2560, max_width=1440):
    # size before compression (KB)
    img_src_size = os.path.getsize(img_path_local) / 1024
    # where the compressed copy is saved
    img_path_compress = "./images/opencvs_" + img_path_local.split("/")[-1]
    # skip compression if the file is already below the size threshold
    if img_src_size < img_size_thresh:
        print("图片大小小于" + str(img_size_thresh) + "KB,跳过压缩")
        shutil.copyfile(img_path_local, img_path_compress)
    else:
        print("openCV压缩前图片大小:" + str(int(img_src_size)) + "KB")
        # compute the scale factor from the size caps
        img = cv2.imread(img_path_local)
        height, width = img.shape[:2]
        print("openCV压缩前图片尺寸(height, width)=:(" + str(int(height)) + "," + str(int(width)) + ")")
        compress_rate = min(max_height / height, max_width / width, 1)
        # note: cv2.resize takes (width, height), in that order
        img_compress = cv2.resize(img, (int(width * compress_rate), int(height * compress_rate)),
                                  interpolation=cv2.INTER_AREA)  # area interpolation, suited to downscaling
        cv2.imwrite(img_path_compress, img_compress)
        # size after compression
        img_compress_size = os.path.getsize(img_path_compress) / 1024
        print("openCV压缩后图片大小:" + str(int(img_compress_size)) + "KB")
        print("openCV压缩后图片尺寸(height, width)=:(" + str(int(height * compress_rate)) + "," + str(int(width * compress_rate)) + ")")
    return img_path_compress

img_path_local = "./images/1684155324391.jpg"
img_path_compress = img_compress_by_openCV(img_path_local)
```

Sample output:

```
openCV压缩前图片大小:2219KB
openCV压缩前图片尺寸(heigh, width)=:(4000,3000)
openCV压缩后图片大小:469KB
openCV压缩前图片尺寸(heigh, width)=:(1920.0,1440)
```
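A quick sanity check is to re-read the compressed file and assert that the caps hold — a minimal sketch, with the file name and limits assumed to match the code above:

```python
import cv2

# hypothetical output of the run above, checked against the same caps
img = cv2.imread("./images/opencvs_1684155324391.jpg")
h, w = img.shape[:2]
assert h <= 2560 and w <= 1440, f"image still too large: {h}x{w}"
print("OK:", h, "x", w)
```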
May 17, 2023
514 views
0 comments
0 likes
2023-05-15
Compressing Images Losslessly from Python with TinyPNG
1. About TinyPNG

TinyPNG is an online image-compression tool that shrinks images to a much smaller file size without a noticeable impact on quality. It works on two levels: a compression algorithm and color reduction.

1.1 Compression algorithm

TinyPNG uses the Deflate algorithm, a lossless compressor that shrinks the image's binary data and thereby the file size. Deflate's core idea is to replace repeated data with references. It has two halves, compression and decompression: during compression the data is split into blocks, each with its own compression dictionary; during decompression the dictionary is used to restore the compressed data.

1.2 Color reduction

The other technique TinyPNG uses is color reduction: shrinking the file by reducing the number of colors the image uses. In practice many of the colors in an image are unnecessary, so removing them cuts the file size. Concretely, TinyPNG first pre-processes the image, finds the least-used colors, and replaces them with more frequently used ones. This is based on K-means, a clustering algorithm that groups the image's colors into clusters and so identifies the least-used ones.

2. Compressing images from Python via the TinyPNG API

Install the dependency:

```
pip install tinify
```

Core code:

```python
import tinify
import os
import shutil

def img_compress_by_tinify(img_path_local, img_size_thresh=200):
    if not os.path.exists("./images"):
        os.makedirs("./images")
    # size before compression (KB)
    img_src_size = os.path.getsize(img_path_local) / 1024
    # where the compressed copy is saved
    img_path_compress = "./images/compress_" + img_path_local.split("/")[-1]
    # skip compression if the file is already below the size threshold
    if img_src_size < img_size_thresh:
        print("图片大小小于" + str(img_size_thresh) + "KB,跳过压缩")
        shutil.copyfile(img_path_local, img_path_compress)
    else:
        print("压缩前图片大小:" + str(int(img_src_size)) + "KB")
        # call TinyPNG to compress the image
        tinify.key = "V02hTQyPz4WRXPyCChGv6nJJTZYVtzcd"
        source = tinify.from_file(img_path_local)
        source.to_file(img_path_compress)
        # size after compression
        img_compress_size = os.path.getsize(img_path_compress) / 1024
        print("压缩后图片大小:" + str(int(img_compress_size)) + "KB")
    return img_path_compress

img_path_local = "./images/1684153992017.jpg"
img_path_compress = img_compress_by_tinify(img_path_local)
print(img_path_compress)
```

Sample output:

```
压缩前图片大小:693KB
压缩后图片大小:148KB
./images/compress_1684153992017.jpg
```
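To make the color-reduction idea concrete, here is a small K-means color-quantization sketch in OpenCV. This only illustrates the principle, not TinyPNG's actual implementation; the file name and the value of K are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("input.png")
pixels = img.reshape(-1, 3).astype(np.float32)

# cluster all pixels into K representative colors
K = 64
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
_, labels, centers = cv2.kmeans(pixels, K, None, criteria, 3, cv2.KMEANS_PP_CENTERS)

# replace every pixel with its cluster center -> at most K distinct colors
quantized = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
cv2.imwrite("quantized.png", quantized)
```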
May 15, 2023
645 views
0 comments
0 likes
2023-05-15
Uploading Files to Alibaba Cloud OSS with Python
Install the dependency:

```
pip install oss2
```

Core code:

```python
import oss2

access_key_id = 'LTA*******************'
access_key_secret = 'ZAx*******************************'
bucket_name = 'caucshop'
endpoint = 'oss-cn-beijing.aliyuncs.com'

# create the bucket object
bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), endpoint, bucket_name)

# local file to upload
file_path_local = "./Snipaste_2023-05-13_18-54-02.jpg"
# key under which the object is saved in OSS
file_path_oss = "goodImgsCompresss/" + file_path_local.split("/")[-1]

# read the file
with open(file_path_local, "rb") as f:
    data = f.read()

# upload it
bucket.put_object(file_path_oss, data)

# URL of the object in OSS
file_url_oss = "https://" + bucket_name + "." + endpoint + "/" + file_path_oss
print(file_url_oss)
```

Running it prints the object's URL in OSS; the bucket here uses public-read permissions:

```
https://caucshop.oss-cn-beijing.aliyuncs.com/goodImgsCompresss/Snipaste_2023-05-13_18-54-02.jpg
```

References
- 【python】 文件/图片上传 阿里云OSS ,获取外网链接 实例_oss图片外链_维玉的博客-CSDN博客
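The URL printed above only works because the bucket is public-read. For a private bucket, oss2 can mint a time-limited signed URL instead — a sketch reusing the same (placeholder) credentials and object key as above:

```python
import oss2

auth = oss2.Auth('LTA*******************', 'ZAx*******************************')
bucket = oss2.Bucket(auth, 'oss-cn-beijing.aliyuncs.com', 'caucshop')

# GET URL valid for 3600 seconds; no public-read permission required
signed_url = bucket.sign_url('GET', 'goodImgsCompresss/Snipaste_2023-05-13_18-54-02.jpg', 3600)
print(signed_url)
```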
May 15, 2023
421 views
0 comments
0 likes
2023-05-10
Text and Image Moderation in Spring Boot with Alibaba Cloud's Content Audit API
1. Enabling the service

Activation page: https://vision.aliyun.com/imageaudit?spm=5176.11065253.1411203.3.7e8153f6mehjzV

2. Shared POM dependencies

```xml
<!-- JSON conversion -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>2.0.25</version>
</dependency>
<!-- shared by both text audit and image audit -->
<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>imageaudit20191230</artifactId>
    <version>2.0.6</version>
</dependency>
```

3. Text audit

3.1 Core code

```java
private static final String accessKeyId = "<your-access-key-id>";
private static final String accessKeySecret = "<your-access-key-secret>";

@PostMapping("/scanText")
public String scanText(@RequestBody HashMap<String, String> reqMap) throws Exception {
    // text to be checked
    String text = reqMap.get("text");
    System.out.println("text=" + text);
    // holds the result
    Map<String, String> resMap = new HashMap<>();
    // instantiate the client
    Config config = new Config()
            // required: your AccessKey ID
            .setAccessKeyId(accessKeyId)
            // required: your AccessKey Secret
            .setAccessKeySecret(accessKeySecret);
    config.endpoint = "imageaudit.cn-shanghai.aliyuncs.com";
    Client client = new Client(config);
    /**
     * spam:       junk text
     * politics:   politically sensitive text
     * abuse:      abusive text
     * terrorism:  violent/terrorist text
     * porn:       pornographic text
     * flood:      flooding/filler text
     * contraband: prohibited content
     * ad:         advertising
     */
    // labels to check against
    ScanTextRequest.ScanTextRequestLabels labels0 = new ScanTextRequest.ScanTextRequestLabels().setLabel("politics");
    ScanTextRequest.ScanTextRequestLabels labels1 = new ScanTextRequest.ScanTextRequestLabels().setLabel("contraband");
    ScanTextRequest.ScanTextRequestLabels labels2 = new ScanTextRequest.ScanTextRequestLabels().setLabel("terrorism");
    ScanTextRequest.ScanTextRequestLabels labels3 = new ScanTextRequest.ScanTextRequestLabels().setLabel("abuse");
    ScanTextRequest.ScanTextRequestLabels labels4 = new ScanTextRequest.ScanTextRequestLabels().setLabel("spam");
    ScanTextRequest.ScanTextRequestLabels labels5 = new ScanTextRequest.ScanTextRequestLabels().setLabel("ad");
    // content to check
    ScanTextRequest.ScanTextRequestTasks tasks0 = new ScanTextRequest.ScanTextRequestTasks().setContent(text);
    ScanTextRequest scanTextRequest = new ScanTextRequest()
            .setTasks(java.util.Arrays.asList(tasks0))
            .setLabels(java.util.Arrays.asList(labels0, labels1, labels2, labels3, labels4, labels5));
    RuntimeOptions runtime = new RuntimeOptions();
    ScanTextResponse response = null;
    try {
        response = client.scanTextWithOptions(scanTextRequest, runtime);
        resMap.put("data", JSON.toJSONString(response.getBody().getData().getElements().get(0).getResults()));
        // inspect the returned details to decide what the text contains
        List<ScanTextResponseBody.ScanTextResponseBodyDataElementsResultsDetails> responseDetails =
                response.getBody().getData().getElements().get(0).getResults().get(0).getDetails();
        if (responseDetails.size() > 0) {
            resMap.put("state", "block");
            StringBuilder error = new StringBuilder("检测到:");
            for (ScanTextResponseBody.ScanTextResponseBodyDataElementsResultsDetails detail : responseDetails) {
                if ("abuse".equals(detail.getLabel())) error.append("辱骂内容、");
                if ("spam".equals(detail.getLabel())) error.append("垃圾内容、");
                if ("politics".equals(detail.getLabel())) error.append("敏感内容、");
                if ("terrorism".equals(detail.getLabel())) error.append("暴恐内容、");
                if ("porn".equals(detail.getLabel())) error.append("黄色内容、");
                if ("flood".equals(detail.getLabel())) error.append("灌水内容、");
                if ("contraband".equals(detail.getLabel())) error.append("违禁内容、");
                if ("ad".equals(detail.getLabel())) error.append("广告内容、");
            }
            resMap.put("msg", error.toString());
            return JSON.toJSONString(resMap);
        } else {
            resMap.put("state", "pass");
            resMap.put("msg", "未检测出违规!");
            return JSON.toJSONString(resMap);
        }
    } catch (Exception _error) {
        resMap.put("state", "review");
        resMap.put("msg", "阿里云无法进行判断,需要人工进行审核,错误详情:" + _error);
        return JSON.toJSONString(resMap);
    }
}
```

3.2 Sample call

Request:

```json
{ "text": "hello word! 卧槽6666" }
```

Response:

```json
{
  "state": "block",
  "msg": "检测到:辱骂内容、",
  "data": {
    "details": [{ "contexts": [{ "context": "卧槽" }], "label": "abuse" }],
    "label": "abuse",
    "rate": 99.91,
    "suggestion": "block"
  }
}
```

4. Image audit

4.1 Core code

```java
private static final String accessKeyId = "<your-access-key-id>";
private static final String accessKeySecret = "<your-access-key-secret>";

@PostMapping("/scanImage")
public String scanImage(@RequestBody HashMap<String, String> reqMap) throws Exception {
    // URL of the image to be checked
    String image = reqMap.get("image");
    System.out.println("image=" + image);
    // holds the result
    Map<String, String> resMap = new HashMap<>();
    // instantiate the client
    Config config = new Config()
            .setAccessKeyId(accessKeyId)
            .setAccessKeySecret(accessKeySecret);
    config.endpoint = "imageaudit.cn-shanghai.aliyuncs.com";
    Client client = new Client(config);
    // content to check
    ScanImageRequest.ScanImageRequestTask task0 = new ScanImageRequest.ScanImageRequestTask().setImageURL(image);
    /**
     * porn:      pornography detection
     * terrorism: sensitive-content and risky-person detection
     * ad:        spam-ad detection
     * live:      undesirable-scene detection
     * logo:      logo detection
     */
    ScanImageRequest scanImageRequest = new ScanImageRequest()
            .setTask(java.util.Arrays.asList(task0))
            .setScene(java.util.Arrays.asList("porn", "terrorism", "live"));
    RuntimeOptions runtime = new RuntimeOptions();
    // call the API to get the detection result
    ScanImageResponse response = client.scanImageWithOptions(scanImageRequest, runtime);
    resMap.put("data", JSON.toJSONString(response.getBody().getData().getResults().get(0)));
    // parse the result
    try {
        List<ScanImageResponseBody.ScanImageResponseBodyDataResultsSubResults> responseSubResults =
                response.getBody().getData().getResults().get(0).getSubResults();
        for (ScanImageResponseBody.ScanImageResponseBodyDataResultsSubResults responseSubResult : responseSubResults) {
            // compare strings with equals(), not the != operator
            if (!"pass".equals(responseSubResult.getSuggestion())) {
                resMap.put("state", responseSubResult.getSuggestion());
                String msg = "";
                switch (responseSubResult.getLabel()) {
                    case "porn":      msg = "图片智能鉴黄未通过"; break;
                    case "terrorism": msg = "图片敏感内容识别、图片风险人物识别未通过"; break;
                    case "ad":        msg = "图片垃圾广告识别未通过"; break;
                    case "live":      msg = "图片不良场景识别未通过"; break;
                    case "logo":      msg = "图片Logo识别未通过"; break;
                }
                resMap.put("msg", msg);
                return JSON.toJSONString(resMap);
            }
        }
    } catch (Exception error) {
        resMap.put("state", "review");
        resMap.put("msg", "发生错误,详情:" + error);
        return JSON.toJSONString(resMap);
    }
    resMap.put("state", "pass");
    return JSON.toJSONString(resMap);
}
```

4.2 Sample call

Request:

```json
{ "image": "https://jupite-aliyun.oss-cn-hangzhou.aliyuncs.com/second_hand_shop/client/img/goodImgs/1683068284289.jpg" }
```

Response:

```json
{
  "data": {
    "imageURL": "http://jupite-aliyun.oss-cn-hangzhou.aliyuncs.com/second_hand_shop/client/img/goodImgs/1683068284289.jpg",
    "subResults": [
      { "label": "normal", "rate": 99.9,  "scene": "porn",      "suggestion": "pass" },
      { "label": "normal", "rate": 99.88, "scene": "terrorism", "suggestion": "pass" },
      { "label": "normal", "rate": 99.91, "scene": "live",      "suggestion": "pass" }
    ]
  },
  "state": "pass"
}
```

References
- https://next.api.aliyun.com/api/imageaudit/2019-12-30/ScanImage
- 阿里云文本检测 使用教程(Java)
- https://vision.aliyun.com/imageaudit?spm=5176.11065253.1411203.3.7e8153f6mehjzV
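For a quick end-to-end test of the /scanText endpoint above, any HTTP client will do. A sketch assuming the Spring Boot service is running locally on port 8080 (host and port are assumptions):

```python
import requests

# hypothetical local deployment of the controller above
resp = requests.post("http://localhost:8080/scanText",
                     json={"text": "hello word! 卧槽6666"})
print(resp.json())  # this sample should come back with state=block
```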
May 10, 2023
599 views
0 comments
0 likes
2023-05-09
Text-to-Speech in Python (with MP3 Export)
1. pyttsx3 (tested, works)

An excellent text-to-speech module whose output is quite customizable: volume, rate, and voice can all be adjusted, and the approach is always the same — read the property's current value, then set an adjusted one.

Install pyttsx3:

```
pip install pyttsx3
```

Usage:

```python
import pyttsx3

# initialize
engine = pyttsx3.init()

# personalize the output
engine.setProperty('rate', 150)    # speaking rate
engine.setProperty('volume', 2.0)  # volume
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[0].id)

mp3_save_path = "./test.mp3"
text = "西游记"

# render and play the audio
engine.say(text)
engine.runAndWait()

# save to file; queued commands only execute on runAndWait(),
# so without the final runAndWait() the saved file comes out empty
engine.save_to_file(text, mp3_save_path)
engine.runAndWait()
```

2. gTTS (could not get it to work locally)

Install:

```
pip install gtts
```

Usage:

```python
from gtts import gTTS

mp3_save_path = "./test.mp3"
text = "西游记"
# text: audio content
# lang: audio language
tts = gTTS(text=text, lang='zh-tw')
tts.save(mp3_save_path)
```
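Which voice index corresponds to which language differs per system, so rather than hard-coding voices[0] it helps to enumerate what pyttsx3 actually sees — a small sketch using only documented properties:

```python
import pyttsx3

engine = pyttsx3.init()
# list installed voices so you can pick one by name/language instead of index
for i, voice in enumerate(engine.getProperty('voices')):
    print(i, voice.id, voice.name, voice.languages)
```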
May 9, 2023
406 views
0 comments
0 likes
2023-05-03
Setting Up OpenVPN: Linking Devices Across Sites into One Network
1. Server: one-click install script

```bash
wget https://nos-public.nos-eastchina1.126.net/vpn -O openvpn-install.sh && bash openvpn-install.sh
```

After installation, download the client.ovpn file from /root/ to your local machine and import it into an OpenVPN client to connect.

Note: it is strongly recommended not to use the default VPN port 1194 — change it to another port, open that port in the firewall, and, on a cloud server, also allow it in the security group.

To create further client.ovpn files, simply run the script again:

```bash
bash openvpn-install.sh
```

2. Client connection

2.1 Linux

```bash
sudo apt-get install openvpn
openvpn client.ovpn
```

To start the connection on boot:

```bash
sudo vim /etc/rc.local
```

```bash
# add the following before `exit 0`
/usr/sbin/openvpn --config /root/client.ovpn &
```

2.2 Windows

Client download: https://nos-public.nos-eastchina1.126.net/openvpn-install-2.4.6-I602.exe

References
- 搭建 OpenVPN

Backup of the script, openvpn-install.sh (continues below):

#!/bin/bash
# OpenVPN road warrior installer for Debian, Ubuntu and CentOS

# This script will work on Debian, Ubuntu, CentOS and probably other distros
# of the same families, although no support is offered for them. It isn't
# bulletproof but it will probably work if you simply want to setup a VPN on
# your Debian/Ubuntu/CentOS box. It has been designed to be as unobtrusive and
# universal as possible.

# Detect Debian users running the script with "sh" instead of bash
if readlink /proc/$$/exe | grep -qs "dash"; then
    echo "This script needs to be run with bash, not sh"
    exit 1
fi

if [[ "$EUID" -ne 0 ]]; then
    echo "Sorry, you need to run this as root"
    exit 2
fi

if [[ ! -e /dev/net/tun ]]; then
    echo "The TUN device is not available
You need to enable TUN before running this script"
    exit 3
fi

if grep -qs "CentOS release 5" "/etc/redhat-release"; then
    echo "CentOS 5 is too old and not supported"
    exit 4
fi

if [[ -e /etc/debian_version ]]; then
    OS=debian
    GROUPNAME=nogroup
    RCLOCAL='/etc/rc.local'
elif [[ -e /etc/centos-release || -e /etc/redhat-release ]]; then
    OS=centos
    GROUPNAME=nobody
    RCLOCAL='/etc/rc.d/rc.local'
else
    echo "Looks like you aren't running this installer on Debian, Ubuntu or CentOS"
    exit 5
fi

newclient () {
    # Generates the custom client.ovpn
    cp /etc/openvpn/client-common.txt ~/$1.ovpn
    echo "<ca>" >> ~/$1.ovpn
    cat /etc/openvpn/easy-rsa/pki/ca.crt >> ~/$1.ovpn
    echo "</ca>" >> ~/$1.ovpn
    echo "<cert>" >> ~/$1.ovpn
    cat /etc/openvpn/easy-rsa/pki/issued/$1.crt >> ~/$1.ovpn
    echo "</cert>" >> ~/$1.ovpn
    echo "<key>" >> ~/$1.ovpn
    cat /etc/openvpn/easy-rsa/pki/private/$1.key >> ~/$1.ovpn
    echo "</key>" >> ~/$1.ovpn
    echo "<tls-auth>" >> ~/$1.ovpn
    cat /etc/openvpn/ta.key >> ~/$1.ovpn
    echo "</tls-auth>" >> ~/$1.ovpn
}

# Try to get our IP from the system and fallback to the Internet.
# I do this to make the script compatible with NATed servers (lowendspirit.com)
# and to avoid getting an IPv6.
IP=$(ip addr | grep 'inet' | grep -v inet6 | grep -vE '127\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | grep -o -E '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | head -1)
VPCSUBNETDEFAULT=$(route -n | grep eth0 | grep 255.255 | awk '{print $1,$3}' | tail -n 1 | awk '{print $1}')
VPCMASKDEFAULT=$(route -n | grep eth0 | grep 255.255 | awk '{print $1,$3}' | tail -n 1 | awk '{print $2}')
if [[ "$IP" = "" ]]; then
    IP=$(wget -4qO- "http://whatismyip.akamai.com/")
fi

if [[ -e /etc/openvpn/server.conf ]]; then
    while :
    do
        clear
        echo "Looks like OpenVPN is already installed"
        echo ""
        echo "What do you want to do?"
echo " 1) Add a new user" echo " 2) Revoke an existing user" echo " 3) Remove OpenVPN" echo " 4) Exit" read -p "Select an option [1-4]: " option case $option in 1) echo "" echo "Tell me a name for the client certificate" echo "Please, use one word only, no special characters" read -p "Client name: " -e -i netease-b CLIENT cd /etc/openvpn/easy-rsa/ ./easyrsa build-client-full $CLIENT nopass # Generates the custom client.ovpn newclient "$CLIENT" echo "" echo "Client $CLIENT added, configuration is available at" ~/"$CLIENT.ovpn" exit ;; 2) # This option could be documented a bit better and maybe even be simplimplified # ...but what can I say, I want some sleep too NUMBEROFCLIENTS=$(tail -n +2 /etc/openvpn/easy-rsa/pki/index.txt | grep -c "^V") if [[ "$NUMBEROFCLIENTS" = '0' ]]; then echo "" echo "You have no existing clients!" exit 6 fi echo "" echo "Select the existing client certificate you want to revoke" tail -n +2 /etc/openvpn/easy-rsa/pki/index.txt | grep "^V" | cut -d '=' -f 2 | nl -s ') ' if [[ "$NUMBEROFCLIENTS" = '1' ]]; then read -p "Select one client [1]: " CLIENTNUMBER else read -p "Select one client [1-$NUMBEROFCLIENTS]: " CLIENTNUMBER fi CLIENT=$(tail -n +2 /etc/openvpn/easy-rsa/pki/index.txt | grep "^V" | cut -d '=' -f 2 | sed -n "$CLIENTNUMBER"p) cd /etc/openvpn/easy-rsa/ ./easyrsa --batch revoke $CLIENT EASYRSA_CRL_DAYS=3650 ./easyrsa gen-crl rm -rf pki/reqs/$CLIENT.req rm -rf pki/private/$CLIENT.key rm -rf pki/issued/$CLIENT.crt rm -rf /etc/openvpn/crl.pem cp /etc/openvpn/easy-rsa/pki/crl.pem /etc/openvpn/crl.pem # CRL is read with each client connection, when OpenVPN is dropped to nobody chown nobody:$GROUPNAME /etc/openvpn/crl.pem echo "" echo "Certificate for client $CLIENT revoked" exit ;; 3) echo "" read -p "Do you really want to remove OpenVPN? [y/n]: " -e -i n REMOVE if [[ "$REMOVE" = 'y' ]]; then PORT=$(grep '^port ' /etc/openvpn/server.conf | cut -d " " -f 2) PROTOCOL=$(grep '^proto ' /etc/openvpn/server.conf | cut -d " " -f 2) if pgrep firewalld; then IP=$(firewall-cmd --direct --get-rules ipv4 nat POSTROUTING | grep '\-s 10.8.0.0/24 '"'"'!'"'"' -d 10.8.0.0/24 -j SNAT --to ' | cut -d " " -f 10) # Using both permanent and not permanent rules to avoid a firewalld reload. firewall-cmd --zone=public --remove-port=$PORT/$PROTOCOL firewall-cmd --zone=trusted --remove-source=10.8.0.0/24 firewall-cmd --permanent --zone=public --remove-port=$PORT/$PROTOCOL firewall-cmd --permanent --zone=trusted --remove-source=10.8.0.0/24 firewall-cmd --direct --remove-rule ipv4 nat POSTROUTING 0 -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to $IP firewall-cmd --permanent --direct --remove-rule ipv4 nat POSTROUTING 0 -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to $IP else IP=$(grep 'iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to ' $RCLOCAL | cut -d " " -f 14) iptables -t nat -D POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to $IP sed -i '/iptables -t nat -A POSTROUTING -s 10.8.0.0\/24 ! 
-d 10.8.0.0\/24 -j SNAT --to /d' $RCLOCAL if iptables -L -n | grep -qE '^ACCEPT'; then iptables -D INPUT -p $PROTOCOL --dport $PORT -j ACCEPT iptables -D FORWARD -s 10.8.0.0/24 -j ACCEPT iptables -D FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT sed -i "/iptables -I INPUT -p $PROTOCOL --dport $PORT -j ACCEPT/d" $RCLOCAL sed -i "/iptables -I FORWARD -s 10.8.0.0\/24 -j ACCEPT/d" $RCLOCAL sed -i "/iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT/d" $RCLOCAL fi fi if hash sestatus 2>/dev/null; then if sestatus | grep "Current mode" | grep -qs "enforcing"; then if [[ "$PORT" != '1194' || "$PROTOCOL" = 'tcp' ]]; then semanage port -d -t openvpn_port_t -p $PROTOCOL $PORT fi fi fi if [[ "$OS" = 'debian' ]]; then apt-get remove --purge -y openvpn else yum remove openvpn -y fi rm -rf /etc/openvpn echo "" echo "OpenVPN removed!" else echo "" echo "Removal aborted!" fi exit ;; 4) exit;; esac done else clear echo 'Welcome to this quick OpenVPN "road warrior" installer' echo "" # OpenVPN setup and first user creation echo "I need to ask you a few questions before starting the setup" echo "You can leave the default options and just press enter if you are ok with them" echo "" echo "First I need to know the IPv4 address of the network interface you want OpenVPN" echo "listening to." read -p "IP address: " -e -i $IP IP echo "" echo "Which protocol do you want for OpenVPN connections?" echo " 1) UDP (recommended)" echo " 2) TCP" read -p "Protocol [1-2]: " -e -i 1 PROTOCOL case $PROTOCOL in 1) PROTOCOL=udp ;; 2) PROTOCOL=tcp ;; esac echo "" echo "What port do you want OpenVPN listening to?" read -p "Port: " -e -i 1194 PORT echo "" echo "what VPC subnet?" read -p "VPC subnet(E.g 172.16.0.0): " -e -i $VPCSUBNETDEFAULT VPCSUBNET echo "" echo "what VPC mask?" read -p "VPC mask(E.g 255.255.0.0): " -e -i $VPCMASKDEFAULT VPCMASK echo "" # echo "Which DNS do you want to use with the VPN?" # echo " 1) Current system resolvers" # echo " 2) Google" # echo " 3) OpenDNS" # echo " 4) NTT" # echo " 5) Hurricane Electric" # echo " 6) Verisign" # read -p "DNS [1-6]: " -e -i 1 DNS # echo "" echo "Finally, tell me your name for the client certificate" echo "Please, use one word only, no special characters" read -p "Client name: " -e -i netease-b CLIENT echo "" echo "Okay, that was all I needed. We are ready to setup your OpenVPN server now" read -n1 -r -p "Press any key to continue..." 
if [[ "$OS" = 'debian' ]]; then apt-get update apt-get install openvpn iptables openssl ca-certificates -y else # Else, the distro is CentOS yum install epel-release -y yum install openvpn iptables openssl wget ca-certificates -y fi # An old version of easy-rsa was available by default in some openvpn packages if [[ -d /etc/openvpn/easy-rsa/ ]]; then rm -rf /etc/openvpn/easy-rsa/ fi # Get easy-rsa wget -O ~/EasyRSA-3.0.4.tgz "https://nos-public.nos-eastchina1.126.net/EasyRSA-3.0.4.tgz" tar xzf ~/EasyRSA-3.0.4.tgz -C ~/ mv ~/EasyRSA-3.0.4/ /etc/openvpn/ mv /etc/openvpn/EasyRSA-3.0.4/ /etc/openvpn/easy-rsa/ chown -R root:root /etc/openvpn/easy-rsa/ rm -rf ~/EasyRSA-3.0.4.tgz cd /etc/openvpn/easy-rsa/ # Create the PKI, set up the CA, the DH params and the server + client certificates ./easyrsa init-pki ./easyrsa --batch build-ca nopass ./easyrsa gen-dh ./easyrsa build-server-full server nopass ./easyrsa build-client-full $CLIENT nopass EASYRSA_CRL_DAYS=3650 ./easyrsa gen-crl # Move the stuff we need cp pki/ca.crt pki/private/ca.key pki/dh.pem pki/issued/server.crt pki/private/server.key pki/crl.pem /etc/openvpn # CRL is read with each client connection, when OpenVPN is dropped to nobody chown nobody:$GROUPNAME /etc/openvpn/crl.pem # Generate key for tls-auth openvpn --genkey --secret /etc/openvpn/ta.key # Generate server.conf echo "port $PORT proto $PROTOCOL dev tun sndbuf 0 rcvbuf 0 ca ca.crt cert server.crt key server.key dh dh.pem auth SHA512 tls-auth ta.key 0 topology subnet server 10.8.0.0 255.255.255.0 ifconfig-pool-persist ipp.txt" > /etc/openvpn/server.conf echo "push \"route $VPCSUBNET $VPCMASK vpn_gateway\"" >> /etc/openvpn/server.conf echo "keepalive 10 120 cipher AES-256-CBC comp-lzo user nobody group $GROUPNAME persist-key duplicate-cn max-clients 10 persist-tun status openvpn-status.log verb 3 crl-verify crl.pem" >> /etc/openvpn/server.conf # Enable net.ipv4.ip_forward for the system sed -i '/\<net.ipv4.ip_forward\>/c\net.ipv4.ip_forward=1' /etc/sysctl.conf if ! grep -q "\<net.ipv4.ip_forward\>" /etc/sysctl.conf; then echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf fi # Avoid an unneeded reboot echo 1 > /proc/sys/net/ipv4/ip_forward if pgrep firewalld; then # Using both permanent and not permanent rules to avoid a firewalld # reload. # We don't use --add-service=openvpn because that would only work with # the default port and protocol. firewall-cmd --zone=public --add-port=$PORT/$PROTOCOL firewall-cmd --zone=trusted --add-source=10.8.0.0/24 firewall-cmd --permanent --zone=public --add-port=$PORT/$PROTOCOL firewall-cmd --permanent --zone=trusted --add-source=10.8.0.0/24 # Set NAT for the VPN subnet firewall-cmd --direct --add-rule ipv4 nat POSTROUTING 0 -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to $IP firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to $IP else # Needed to use rc.local with some systemd distros if [[ "$OS" = 'debian' && ! -e $RCLOCAL ]]; then echo '#!/bin/sh -e exit 0' > $RCLOCAL fi chmod +x $RCLOCAL # Set NAT for the VPN subnet iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to $IP sed -i "1 a\iptables -t nat -A POSTROUTING -s 10.8.0.0/24 ! -d 10.8.0.0/24 -j SNAT --to $IP" $RCLOCAL if iptables -L -n | grep -qE '^(REJECT|DROP)'; then # If iptables has at least one REJECT rule, we asume this is needed. # Not the best approach but I can't think of other and this shouldn't # cause problems. 
iptables -I INPUT -p $PROTOCOL --dport $PORT -j ACCEPT iptables -I FORWARD -s 10.8.0.0/24 -j ACCEPT iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT sed -i "1 a\iptables -I INPUT -p $PROTOCOL --dport $PORT -j ACCEPT" $RCLOCAL sed -i "1 a\iptables -I FORWARD -s 10.8.0.0/24 -j ACCEPT" $RCLOCAL sed -i "1 a\iptables -I FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT" $RCLOCAL fi fi # If SELinux is enabled and a custom port or TCP was selected, we need this if hash sestatus 2>/dev/null; then if sestatus | grep "Current mode" | grep -qs "enforcing"; then if [[ "$PORT" != '1194' || "$PROTOCOL" = 'tcp' ]]; then # semanage isn't available in CentOS 6 by default if ! hash semanage 2>/dev/null; then yum install policycoreutils-python -y fi semanage port -a -t openvpn_port_t -p $PROTOCOL $PORT fi fi fi # And finally, restart OpenVPN if [[ "$OS" = 'debian' ]]; then # Little hack to check for systemd if pgrep systemd-journal; then systemctl restart openvpn@server.service else /etc/init.d/openvpn restart fi else if pgrep systemd-journal; then systemctl restart openvpn@server.service systemctl enable openvpn@server.service else service openvpn restart chkconfig openvpn on fi fi # Try to detect a NATed connection and ask about it to potential LowEndSpirit users EXTERNALIP=$(wget -4qO- "http://whatismyip.akamai.com/") if [[ "$IP" != "$EXTERNALIP" ]]; then echo "" echo "Looks like your server is behind a NAT!" echo "" echo "If your server is NATed (e.g. LowEndSpirit), I need to know the external IP" echo "If that's not the case, just ignore this and leave the next field blank" read -p "External IP: " -e -i $EXTERNALIP USEREXTERNALIP if [[ "$USEREXTERNALIP" != "" ]]; then IP=$USEREXTERNALIP fi fi # client-common.txt is created so we have a template to add further users later echo "client dev tun proto $PROTOCOL sndbuf 0 rcvbuf 0 remote $IP $PORT resolv-retry infinite nobind max-clients 10 persist-key persist-tun remote-cert-tls server auth SHA512 cipher AES-256-CBC comp-lzo #setenv opt block-outside-dns key-direction 1 auth-nocache verb 3" > /etc/openvpn/client-common.txt # Generates the custom client.ovpn newclient "$CLIENT" echo "" echo "Finished!" echo "" echo "Your client configuration is available at" ~/"$CLIENT.ovpn" echo "If you want to add more clients, you simply need to run this script again!" fi
May 3, 2023
723 views
0 comments
0 likes
2023-05-02
Integrating Alibaba Cloud SMS into Spring Boot
0. Scenario

SMS verification codes for sign-up, login, and so on.

1. Enable Alibaba Cloud SMS

Go to the official Alibaba Cloud site https://www.aliyun.com/ and choose the SMS service. There you obtain the four parameters we need: accessKeyId, accessKeySecret, the SMS signature, and the template code.

2. Spring Boot integration, method 1 (method 2 is recommended)

2.1 Dependency

```xml
<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>aliyun-java-sdk-core</artifactId>
    <version>4.3.3</version>
</dependency>
```

2.2 Utility / service class

```java
package top.inat.shop.utils;

import com.aliyuncs.CommonRequest;
import com.aliyuncs.CommonResponse;
import com.aliyuncs.DefaultAcsClient;
import com.aliyuncs.IAcsClient;
import com.aliyuncs.http.MethodType;
import com.aliyuncs.profile.DefaultProfile;
import org.springframework.stereotype.Component;
import java.util.Random;

/**
 * Verification-code utility class
 * @author: jupiter
 * @create: 2023-04-23 10:17
 */
@Component
public class AliMessageUtil {
    // Alibaba Cloud key id and secret, created in the account console
    private static final String accessKeyId = "XXXXXXXX";
    private static final String secret = "XXXXXXXX";
    // approved Alibaba Cloud SMS signature name
    private static final String SignName = "smile佳";
    // approved Alibaba Cloud SMS template code
    private static final String TemplateCode = "SMS_147439706";

    /**
     * Generate a 6-digit numeric verification code
     */
    public static String generateVerifiCode() {
        int n = 6;
        StringBuilder code = new StringBuilder();
        Random ran = new Random();
        for (int i = 0; i < n; i++) {
            code.append(Integer.valueOf(ran.nextInt(10)).toString());
        }
        return code.toString();
    }

    /**
     * Send a verification code via Alibaba Cloud SMS
     * @param phone phone number
     * @param code  verification code
     */
    public static boolean sendMsmVerifyCode(String phone, String code) {
        // "default" is the region node; then the Alibaba Cloud id and secret
        DefaultProfile profile = DefaultProfile.getProfile("default", accessKeyId, secret);
        IAcsClient client = new DefaultAcsClient(profile);
        // assemble the request; the core SDK uses CommonRequest
        CommonRequest request = new CommonRequest();
        request.setMethod(MethodType.POST);
        request.setDomain("dysmsapi.aliyuncs.com");
        request.setVersion("2017-05-25");
        request.setAction("SendSms");
        request.putQueryParameter("PhoneNumbers", phone);
        request.putQueryParameter("SignName", SignName);
        request.putQueryParameter("TemplateCode", TemplateCode);
        request.putQueryParameter("TemplateParam", "{\"code\":\"" + code + "\"}");
        try {
            CommonResponse response = client.getCommonResponse(request);
            System.out.println(response.getData());
            return response.getHttpResponse().isSuccess();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return false;
    }
}
```

Test:

```java
@Test
void aliMessageTest() {
    String code = AliMessageUtil.generateVerifiCode();
    System.out.println("生成的验证码为:" + code);
    String phone = "18673918533";
    boolean sendRes = AliMessageUtil.sendMsmVerifyCode(phone, code);
    System.out.println("短信发送结果:" + sendRes);
}
```

2.3 Output

```
生成的验证码为:196573
{"Message":"OK","RequestId":"97D16831-6EB8-5300-AF5F-25EC86638C26","Code":"OK","BizId":"405312082956966726^0"}
短信发送结果:true
```

3. Spring Boot integration, method 2

3.1 Dependencies

```xml
<dependency>
    <groupId>com.aliyun</groupId>
    <artifactId>dysmsapi20170525</artifactId>
    <version>2.0.9</version>
</dependency>
<!-- fastjson is only used to print the full response; drop it if you only care about success/failure -->
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.35</version>
</dependency>
```

3.2 Utility / service class

```java
package top.inat.shop.utils;

import com.alibaba.fastjson.JSON;
import com.aliyun.dysmsapi20170525.Client;
import com.aliyun.dysmsapi20170525.models.SendSmsRequest;
import com.aliyun.dysmsapi20170525.models.SendSmsResponse;
import com.aliyun.dysmsapi20170525.models.SendSmsResponseBody;
import com.aliyun.teaopenapi.models.Config;
import org.springframework.stereotype.Component;
import java.util.Random;

/**
 * Verification-code utility class
 * @author: jupiter
 * @create: 2023-04-23 10:17
 */
@Component
public class AliMessageUtil {
    private static final String accessKeyId = "LTAI5t7Lg3SECa8JSvyYrhoj";            // AccessKey ID generated in the account console
    private static final String accessKeySecret = "AXeyeLFKUU8MkgUSnTj2qTLqnZv2rL"; // AccessKey Secret generated in the account console
    private static final String SignName = "smile佳";            // approved SMS signature name
    private static final String TemplateCode = "SMS_147439706"; // approved SMS template code

    /**
     * Generate a 6-digit numeric verification code
     */
    public static String generateVerifiCode() {
        int n = 6;
        StringBuilder code = new StringBuilder();
        Random ran = new Random();
        for (int i = 0; i < n; i++) {
            code.append(Integer.valueOf(ran.nextInt(10)).toString());
        }
        return code.toString();
    }

    /**
     * Send a verification code via Alibaba Cloud SMS
     * @param phone phone number
     * @param code  verification code
     */
    public static boolean sendMsmVerifyCode(String phone, String code) throws Exception {
        Config config = new Config()
                .setAccessKeyId(accessKeyId)
                .setAccessKeySecret(accessKeySecret)
                .setEndpoint("dysmsapi.aliyuncs.com");
        Client client = new Client(config);
        SendSmsRequest request = new SendSmsRequest();
        request.setPhoneNumbers(phone);
        request.setSignName(SignName);
        request.setTemplateCode(TemplateCode);
        request.setTemplateParam("{\"code\":\"" + code + "\"}");
        SendSmsResponse response = client.sendSms(request);
        SendSmsResponseBody body = response.getBody();
        System.out.println(JSON.toJSONString(body)); // comment this line out if you don't need the fastjson dump
        return "OK".equals(body.getCode());
    }
}
```

Test:

```java
@Test
void aliMessageTest() throws Exception {
    String code = AliMessageUtil.generateVerifiCode();
    System.out.println("生成的验证码为:" + code);
    String phone = "18673918533";
    boolean sendRes = AliMessageUtil.sendMsmVerifyCode(phone, code);
    System.out.println("短信发送结果:" + sendRes);
}
```

3.3 Output

```
生成的验证码为:196573
{"Message":"OK","RequestId":"97D16831-6EB8-5300-AF5F-25EC86638C26","Code":"OK","BizId":"405312082956966726^0"}
短信发送结果:true
```

References
- SpringBoot整合阿里云短信服务详细过程(保证初学者也能实现)
- SpringBoot集成阿里云短信服务发送短信
- 阿里云——Java实现手机短信验证码功能
May 2, 2023
364 views
0 comments
0 likes
2023-05-01
[WeChat Mini-Program Bug]: redirectTo:fail can not redirectTo a tabbar page
1. Scenario

While building a mini program recently, the login flow used wx.redirectTo() to jump back to the user home page, which belongs to the tabBar. The jump worked fine in the devtools simulator, but on a real device it failed with `redirectTo:fail can not redirectTo a tabbar page`. It turns out that pages belonging to the tabBar can only be reached with wx.switchTab.

```javascript
wx.redirectTo({
  url: '/pages/usercenter/index/index',
})
```

2. Fix

```javascript
wx.switchTab({
  url: '/pages/usercenter/index/index',
  success: function (e) {
    var page = getCurrentPages().pop();
    if (page == undefined || page == null) {
      return;
    }
    page.onLoad();
  }
})
```

Note: wx.switchTab to an already-created page does not reload it. If user data has changed — say, jumping back after the user edits their profile — the page needs to reload that data, so refresh it in the success callback of the jump, as above.

3. The navigation APIs compared

- wx.redirectTo: closes the current page, then jumps to the specified non-tabBar page; not limited by page-stack depth.
- wx.navigateTo: keeps the current page open and jumps to the specified non-tabBar page; note the page stack is limited to five levels. A back button is shown in the top-left corner for returning to the previous page.
- wx.switchTab: the only way to jump to a tabBar page.

References
- 微信小程序 报错:{"errMsg":"redirectTo:fail can not redirectTo a tabbar page"}及路由跳转总结
- 解决微信小程序switchTab后tab不刷新
May 1, 2023
558 views
0 comments
0 likes
2023-03-28
[Video Object Detection]: Training MEGA / DFF / FGFA on Your Own Dataset
1.创建环境创建虚拟环境conda create --name MEGA -y python=3.7 source activate MEGA安装基础包conda install ipython pip pip install ninja yacs cython matplotlib tqdm opencv-python scipy export INSTALL_DIR=$PWD安装pytorch在安装pytorch的时候,原作者是这样的:conda install pytorch=1.3.0 torchvision cudatoolkit=10.0 -c pytorch但实际上使用cuda11.0+pytorch1.7也可以编译跑通,所以在这一步我们将其替换成:conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch然后就是作者使用到的coco数据集和cityperson数据集的安装:cd $INSTALL_DIR git clone https://github.com/cocodataset/cocoapi.git cd cocoapi/PythonAPId python setup.py build_ext install cd $INSTALL_DIR git clone https://github.com/mcordts/cityscapesScripts.git cd cityscapesScripts/ python setup.py build_ext install安装apex:(可省略) (建议省略,没省略运行报错)git clone https://github.com/NVIDIA/apex.git cd apex python setup.py install --cuda_ext --cpp_ext如果使用的是cuda11.0+pytorch1.7这里会报错deform_conv_cuda.cu(155): error: identifier "AT_CHECK " is undefined解决:在mega_core/csrc/cuda/deform_conv_cuda.cu 和 mega_core/csrc/cuda/deform_pool_cuda.cu文件的开头加上如下代码:#ifndef AT_CHECK #define AT_CHECK TORCH_CHECK #endif实际上原作者并没有使用到apex来进行混合精度训练,这一步也可省略,若省略的话在代码中需要修改几处地方:首先是mega_core/engine/trainer.py中的开头导入apex包注释掉,108-109行改为:losses.backward()还有tools/train_net.py中33行-36行注释掉# try: # from apex import amp # except ImportError: # raise ImportError('Use APEX for multi-precision via apex.amp') 50行也注释掉:#model, optimizer = amp.initialize(model, optimizer, opt_level=amp_opt_level)还有mega_core/layers/nms.py,注释掉第5行第8行改为:nms = _C.nms还有mega_core/layers/roi_align.py注释掉第10、57行还有mega_core/layers/roi_pool.py注释掉第10、56行这样应该就可以了。2.下载和初始化mega.pytorch# install PyTorch Detection cd $INSTALL_DIR git clone https://github.com/Scalsol/mega.pytorch.git cd mega.pytorch # the following will install the lib with # symbolic links, so that you can modify # the files if you want and won't need to # re-build it python setup.py build develop pip install 'pillow<7.0.0'3.制作自己的数据集参考作者提供的customize.md文件3.1 数据集格式参考:https://github.com/Scalsol/mega.pytorch/blob/master/CUSTOMIZE.md【注意事项】1.图片编号是从0开始的6位数字;(不想实现自己的数据加载器这是必要的)2.annotation内的xml文件与train、val钟文件一一对应。datasets ├── vid_custom | |── train | | |── video_snippet_1 | | | |── 000000.JPEG | | | |── 000001.JPEG | | | |── 000002.JPEG | | | ... | | |── video_snippet_2 | | | |── 000000.JPEG | | | |── 000001.JPEG | | | |── 000002.JPEG | | | ... | | ... | |── val | | |── video_snippet_1 | | | |── 000000.JPEG | | | |── 000001.JPEG | | | |── 000002.JPEG | | | ... | | |── video_snippet_2 | | | |── 000000.JPEG | | | |── 000001.JPEG | | | |── 000002.JPEG | | | ... | | ... | |── annotation | | |── train | | | |── video_snippet_1 | | | | |── 000000.xml | | | | |── 000001.xml | | | | |── 000002.xml | | | | ... | | | |── video_snippet_2 | | | | |── 000000.xml | | | | |── 000001.xml | | | | |── 000002.xml | | | | ... | | ... | | |── val | | | |── video_snippet_1 | | | | |── 000000.xml | | | | |── 000001.xml | | | | |── 000002.xml | | | | ... | | | |── video_snippet_2 | | | | |── 000000.xml | | | | |── 000001.xml | | | | |── 000002.xml | | | | ... 
| | ...3.2 准备自己txt文件具体参考源MEGA代码中datasets\ILSVRC2015\ImageSets提供的文档。格式:每一行4列依次代表:video folder, no meaning(just ignore it),frame number,video length;训练集VID_train.txt 对应vid_custom/train文件夹train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 10 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 30 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 50 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 70 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 90 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 110 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 130 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 150 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 170 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 190 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 210 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 230 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 250 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 270 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00000000 1 290 300 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00001000 1 1 48 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00001000 1 4 48 train/ILSVRC2015_VID_train_0000/ILSVRC2015_train_00001000 1 8 48 ···验证集VID_val.txt 对应vid_custom/val文件夹val/ILSVRC2015_val_00000000 1 0 464 val/ILSVRC2015_val_00000000 2 1 464 val/ILSVRC2015_val_00000000 3 2 464 val/ILSVRC2015_val_00000000 4 3 464 val/ILSVRC2015_val_00000000 5 4 464 val/ILSVRC2015_val_00000000 6 5 464 val/ILSVRC2015_val_00000000 7 6 464 val/ILSVRC2015_val_00000000 8 7 464 val/ILSVRC2015_val_00000000 9 8 464 val/ILSVRC2015_val_00000000 10 9 464 val/ILSVRC2015_val_00000000 11 10 464 val/ILSVRC2015_val_00000000 12 11 464 val/ILSVRC2015_val_00000000 13 12 464 val/ILSVRC2015_val_00000000 14 13 464 val/ILSVRC2015_val_00000000 15 14 464 val/ILSVRC2015_val_00000000 16 15 464 ···4.参数修改mega_core/data/datasets/vid.py修改VIDDataset内classes和classes_map:# classes=['__background__',#always index0 'car'] # classes_map=['__background__',# always index0 'n02958343'] # 自己标的数据集两个都填一样的就行 classes = ['__background__', # always index 0 'BridgeVehicle', 'Person', 'FollowMe', 'Plane', 'LuggageTruck', 'RefuelingTruck', 'FoodTruck', 'Tractor'] classes_map = ['__background__', # always index 0 'BridgeVehicle', 'Person', 'FollowMe', 'Plane', 'LuggageTruck', 'RefuelingTruck', 'FoodTruck', 'Tractor']mega_core/config/paths_catalog.py修改 DatasetCatalog.DATASETS,在变量的最后加上如下内容"vid_custom_train":{ "img_dir":"vid_custom/train", "anno_path":"vid_custom/annotation", "img_index":"vid_custom/VID_train.txt" }, "vid_custom_val":{ "img_dir":"vid_custom/val", "anno_path":"vid_custom/annotation", "img_index":"vid_custom/VID_val.txt" }修改if函数下if语句,添加上vid条件if ("DET" in name) or ("VID" in name) or ("vid" in name):修改configs/BASE_RCNN_1gpu.yaml(取决于你用几张gpu训练)NUM_CLASSES: 9#(物体类别数+背景) TRAIN: ("vid_custom_train",)#记得加“,” TEST: ("vid_custom_val",)#记得加“,”修改configs/MEGA/vid_R_101_C4_MEGA_1x.yamlDATASETS: TRAIN: ("vid_custom_train",)#记得加“,” TEST: ("vid_custom_val",)#记得加“,”5.训练和测试代码5.1 开始训练python -m torch.distributed.launch \ --nproc_per_node=1 \ tools/train_net.py \ --master_port=$((RANDOM + 10000)) \ --config-file configs/BASE_RCNN_1gpu.yaml \ OUTPUT_DIR training_dir/BASE_RCNNpython -m torch.distributed.launch \ --nproc_per_node=1 \ tools/train_net.py \ --master_port=$((RANDOM + 10000)) \ --config-file 
4. Modifying parameters

mega_core/data/datasets/vid.py: change classes and classes_map inside VIDDataset:

# classes = ['__background__',  # always index 0
#            'car']
# classes_map = ['__background__',  # always index 0
#                'n02958343']
# for a dataset you labelled yourself, just fill both lists with the same names
classes = ['__background__',  # always index 0
           'BridgeVehicle', 'Person', 'FollowMe', 'Plane', 'LuggageTruck',
           'RefuelingTruck', 'FoodTruck', 'Tractor']
classes_map = ['__background__',  # always index 0
               'BridgeVehicle', 'Person', 'FollowMe', 'Plane', 'LuggageTruck',
               'RefuelingTruck', 'FoodTruck', 'Tractor']

mega_core/config/paths_catalog.py: add the following at the end of DatasetCatalog.DATASETS:

"vid_custom_train": {
    "img_dir": "vid_custom/train",
    "anno_path": "vid_custom/annotation",
    "img_index": "vid_custom/VID_train.txt"
},
"vid_custom_val": {
    "img_dir": "vid_custom/val",
    "anno_path": "vid_custom/annotation",
    "img_index": "vid_custom/VID_val.txt"
}

In the same file, extend the if statement so the vid condition is included:

if ("DET" in name) or ("VID" in name) or ("vid" in name):

configs/BASE_RCNN_1gpu.yaml (pick the file matching the number of GPUs you train on):

NUM_CLASSES: 9  # number of object classes + background
TRAIN: ("vid_custom_train",)  # don't forget the comma
TEST: ("vid_custom_val",)  # don't forget the comma

configs/MEGA/vid_R_101_C4_MEGA_1x.yaml:

DATASETS:
  TRAIN: ("vid_custom_train",)  # don't forget the comma
  TEST: ("vid_custom_val",)  # don't forget the comma

5. Training and testing

5.1 Start training

python -m torch.distributed.launch \
    --nproc_per_node=1 \
    tools/train_net.py \
    --master_port=$((RANDOM + 10000)) \
    --config-file configs/BASE_RCNN_1gpu.yaml \
    OUTPUT_DIR training_dir/BASE_RCNN

python -m torch.distributed.launch \
    --nproc_per_node=1 \
    tools/train_net.py \
    --master_port=$((RANDOM + 10000)) \
    --config-file configs/DFF/vid_R_50_C4_DFF_1x.yaml \
    OUTPUT_DIR training_dir/vid_R_50_C4_DFF_1x

5.2 Start testing

python -m torch.distributed.launch \
    --nproc_per_node 1 \
    tools/test_net.py \
    --config-file configs/BASE_RCNN_1gpu.yaml \
    MODEL.WEIGHT training_dir/BASE_RCNN/model_0020000.pth

python tools/test_prediction.py \
    --config-file configs/BASE_RCNN_1gpu.yaml \
    --prediction ./

References

MEGA训练自己的数据集-docker
https://github.com/Scalsol/mega.pytorch/issues/63
2023-03-28
769 reads
1 comment
1 like
2023-02-21
sms-activate: a handy overseas virtual-phone-number SMS platform (paid but very cheap | works for ChatGPT)
0. Background

A newly registered ChatGPT account has to pass phone-number verification before it can be used, but mainland China numbers are not supported, so you can only register with a foreign number. Through the SMS platform sms-activate.org you can get a virtual phone number and receive the registration code on it; the fees are very low, and the platform supports the common domestic payment methods.

1. Steps

1.1 Register an account on sms-activate

Visit the platform at https://sms-activate.org/cn. It is reachable directly without any proxy and has a Chinese interface. Register with an email address, complete the registration through the confirmation mail, and log in.

1.2 Check the price of the service you need

A little way down on the left side of the site, pick the specific service to see its price. Taking the OpenAI entry that ChatGPT needs as an example, the cheapest option costs about 30 rubles.

1.3 Top up as needed

Pick whatever payment method suits you; both WeChat Pay and Alipay are supported.

1.4 Receive the verification code

After creating your account on the target platform, go to its phone-verification page. Buy the matching activation service on sms-activate, fill the purchased number into the target platform, and request the code; the verification code shows up on sms-activate shortly after. If you are unlucky and no code arrives, you can get one free refund within the validity period; in my case the code came through immediately, very smoothly.
2023-02-21
1,766 reads
1 comment
0 likes
2023-02-18
Interview question: differences between processes, threads, and coroutines
1. Concepts

Process: a process is a single run of a program with some independent functionality over a data set; it is the smallest unit to which the system allocates resources and that runs independently.

Thread: a thread is an execution unit inside a process; it is the smallest unit of task scheduling and execution.

Coroutine: a coroutine is a user-mode lightweight thread whose scheduling is controlled entirely by the user program.

2. Differences between processes and threads

1. Fundamental difference: a process is the smallest unit of OS resource allocation and independent execution; a thread is the smallest unit of task scheduling and execution.
2. Address space: every process has its own address space, and one process crashing does not affect the others; the threads of one process share that process's address space, and an illegal operation in a single thread can bring the whole process down.
3. Context-switch cost: each process has its own code and data space, so switching between processes is relatively expensive; a thread group shares code and data, so switching between threads is cheaper.

3. How processes and threads are related

A process consists of shared space (heap, code segment, data segment, process space, and open file descriptors) plus one or more threads. The threads share the process's memory space, while a standard thread has its own thread ID, program counter (PC), registers, and stack. (The original post illustrates this relationship with a figure.)

4. Choosing between processes and threads

1. Threads are cheaper to create and destroy than processes; prefer threads when they must be created and destroyed frequently.
2. Thread context switches are faster than process switches; prefer threads for computation-heavy work.
3. Threads use the CPU more efficiently; prefer threads when distributing across cores, and processes when distributing across machines.
4. Threads are less safe and stable than processes; prefer processes when stability and safety matter more.

In short, threads are cheap to create and destroy, switch contexts quickly, occupy few system resources, and use the CPU efficiently, so they are generally the first choice for high-concurrency programming. But all threads in a group share one process's memory space, so safety and stability are comparatively poor: if one thread crashes, it may take the whole process down with it. When higher safety and stability are required, prefer processes for high-concurrency programming.

5. Coroutines

A coroutine has its own register context and stack. When a coroutine is scheduled out, its register context and stack are saved elsewhere; when it is switched back in, the saved context and stack are restored. A coroutine therefore keeps the state it had at its previous call (a particular combination of all of its local state), and every re-entry resumes exactly where the previous call left off. This whole process is controlled by the program itself; no kernel scheduling is involved. (The original post illustrates the relationship between coroutines and threads with a figure; a runnable contrast between the two is sketched after the references.)

6. Differences between coroutines and threads

1. Fundamental difference: a coroutine is a user-mode lightweight thread and is not scheduled by the kernel; a thread is the smallest unit of task scheduling and execution and requires kernel scheduling.
2. Execution model: threads and processes are synchronous mechanisms, while coroutines are an asynchronous mechanism.
3. Context-switch cost: thread state changes and context switches go through the kernel and consume system resources, while coroutine state changes and context switches are handled entirely by the program, with no kernel involvement.

References

进程、线程及协程的区别
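As a concrete illustration of points 1 and 3 above, here is a minimal sketch (Python standard library only; all names are mine) that runs the same two workers first as kernel-scheduled OS threads and then as cooperatively scheduled asyncio coroutines:

import asyncio
import threading

def thread_worker(name):
    # OS threads are scheduled preemptively by the kernel: we do not
    # control where the interleaving happens, and every switch is a
    # kernel-level context switch.
    for i in range(3):
        print(f"thread {name}: step {i}")

async def coro_worker(name):
    # Coroutines are scheduled cooperatively in user space: control is
    # handed over only at explicit await points, with no kernel involved.
    for i in range(3):
        print(f"coroutine {name}: step {i}")
        await asyncio.sleep(0)  # voluntarily yield to the event loop

async def coro_main():
    await asyncio.gather(coro_worker("A"), coro_worker("B"))

if __name__ == "__main__":
    threads = [threading.Thread(target=thread_worker, args=(n,)) for n in "AB"]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    asyncio.run(coro_main())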
2023-02-18
340 reads
0 comments
0 likes
2023-02-18
A rundown of high-frequency Spring & Spring MVC interview questions
0. Question list

1. Spring's IOC and AOP mechanisms?
2. Differences and similarities between the @Autowired and @Resource annotations in Spring?
3. How many kinds of dependency injection are there, and what are they?
4. What is Spring?
5. Explain the bean scopes Spring supports.
6. Which design patterns are used inside the Spring framework?
7. What is MVC?
8. Your understanding of Spring MVC?
9. How do you set up redirects and forwards in Spring MVC?
10. Which Spring MVC annotations are commonly used?

1. Spring's IOC and AOP mechanisms?

IOC (Inversion of Control) means inversion of control, an object-oriented design idea that maintains the dependencies between objects for us and lowers the coupling between them. Simply put, the control over object creation, which we used to code by hand, is handed to the Spring framework. The Spring IOC container works like a factory: when we need an object, we only configure it (in a config file or with annotations) and never worry about how it is created. The IOC container creates objects, wires them together, configures them, and manages their whole life cycle from creation until they are completely destroyed. In Spring, IOC is implemented through DI (Dependency Injection).

AOP (Aspect Oriented Programming) is aspect-oriented programming, a complement to OOP that further improves programming efficiency on top of it. Simply put, it solves a batch of components' shared needs in one place (permission checks, logging, transaction management and so on). Using a technique called "cross-cutting", it opens up the inside of encapsulated objects and packages the common behavior that affects multiple classes into one reusable module named an "Aspect". An aspect is, simply put, the logic or responsibility that is unrelated to the business itself but is called by the business modules collectively; encapsulating it reduces duplicated code in the system, lowers the coupling between modules, and benefits future operability and maintainability.

2. Differences and similarities between @Autowired and @Resource?

Similarities:
- Both annotations are used when injecting bean objects.
- Both can be declared on fields and on setter methods. Note: if declared on a field, the setter is no longer needed; essentially the object is still injected as the setter's argument through an implicit setter call, the setter is just omitted.

Differences:
- @Autowired is provided by Spring, while @Resource is provided by J2EE itself.
- @Autowired injects byType by default, while @Resource injects byName by default.
- The object injected by @Autowired must exist in the IOC container; otherwise you need the attribute required=false, which means the bean to inject is ignored if absent: if it exists it is injected directly, nothing is skipped, and no error is reported.

3. How many kinds of dependency injection are there, and what are they?

Constructor injection: the dependency is passed to the dependent object through constructor parameters, injected when the object is initialized.

<!-- Option 1: set by index -->
<bean id="userT" class="com.kuang.pojo.UserT">
    <!-- index refers to the constructor parameter position, starting from 0 -->
    <constructor-arg index="0" value="kuangshen2"/>
</bean>

<!-- Option 2: set by parameter name -->
<bean id="userT" class="com.kuang.pojo.UserT">
    <!-- name refers to the parameter name -->
    <constructor-arg name="name" value="kuangshen2"/>
</bean>

<!-- Option 3: set by parameter type -->
<bean id="userT" class="com.kuang.pojo.UserT">
    <constructor-arg type="java.lang.String" value="kuangshen2"/>
</bean>

Setter injection: the dependency is injected into the dependent class by calling the setter of the member variable.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="address" class="com.hui.pojo.Address">
        <property name="address" value="背景"/>
    </bean>

    <!-- dependency injection via setters -->
    <bean id="student" class="com.hui.pojo.Student">
        <!-- 1. plain value injection: value -->
        <property name="name" value="李家辉"/>
        <!-- 2. bean injection: ref -->
        <property name="address" ref="address"/>
        <!-- 3. array injection: array-value -->
        <property name="books">
            <array>
                <value>李</value>
                <value>家</value>
                <value>辉</value>
            </array>
        </property>
        <!-- list injection: list-value -->
        <property name="hobbys">
            <list>
                <value>语文</value>
                <value>数学</value>
                <value>英语</value>
            </list>
        </property>
        <!-- map injection: map-entry-key-value -->
        <property name="card">
            <map>
                <entry key="身份证" value="123"/>
                <entry key="银行卡" value="456"/>
            </map>
        </property>
        <!-- set injection: set-value -->
        <property name="games">
            <set>
                <value>IOC</value>
                <value>DI</value>
            </set>
        </property>
        <!-- null injection -->
        <property name="wife">
            <null/>
        </property>
        <!-- properties injection: props -->
        <property name="info">
            <props>
                <prop key="学号">2019</prop>
                <prop key="username">男</prop>
                <prop key="password">123456</prop>
            </props>
        </property>
    </bean>
</beans>

c-namespace and p-namespace: the p-namespace corresponds to setter injection (property); the c-namespace corresponds to constructor injection (constructor-arg).

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:c="http://www.springframework.org/schema/c"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- p-namespace injection: inject property values directly (property); corresponds to setter injection -->
    <!-- requires the constraint xmlns:p="http://www.springframework.org/schema/p" -->
    <bean id="user" class="com.hui.pojo.User" p:name="李家辉" p:age="22"/>

    <!-- c-namespace injection: corresponds to constructor injection (constructor-arg) -->
    <!-- requires the constraint xmlns:c="http://www.springframework.org/schema/c" -->
    <bean id="user1" class="com.hui.pojo.User" c:name="李家毅" c:age="22"/>
</beans>

autowire byName (autowiring by name):

<bean id="user" class="com.kuang.pojo.User" autowire="byName">
    <property name="str" value="qinjiang"/>
</bean>

autowire byType (autowiring by type):

<bean id="dog" class="com.kuang.pojo.Dog"/>
<bean id="cat" class="com.kuang.pojo.Cat"/>
<bean id="cat2" class="com.kuang.pojo.Cat"/>

<bean id="user" class="com.kuang.pojo.User" autowire="byType">
    <property name="str" value="qinjiang"/>
</bean>

Annotation-based autowiring:
- @Autowired + @Qualifier: @Autowired wires by type; combined with @Qualifier it can wire byName. @Qualifier cannot be used on its own.
- @Resource: if a name attribute is specified, it first looks up byName with that name; otherwise it falls back to the default byName lookup; if that fails, it wires byType; if everything fails, an exception is thrown.

4. What is Spring?

Spring is a lightweight IoC and AOP container framework. It is a set of frameworks providing foundational services for Java applications, aimed at simplifying enterprise application development so that developers only need to care about business requirements. Simply put, it is a container framework for holding javabeans (Java objects) and a middle-layer framework ("universal glue") that connects various technologies together. Its main modules:

- Spring Core: the core container of Spring. It implements the IOC pattern and provides the framework's basic functionality. The BeanFactory class in this module is Spring's core class, responsible for configuring and managing JavaBeans; it uses the Factory pattern to realize IoC, i.e. dependency injection.
- Spring AOP: rich support for aspect-oriented programming. The AOP module provides transaction management services for objects in Spring-based applications. With Spring AOP, declarative transaction management can be integrated into an application without depending on extra components, and interceptors, pointcuts, logging and so on can be customized.
- Spring DAO: a JDBC abstraction layer and exception hierarchy that eliminates tedious JDBC coding and vendor-specific error-code parsing, simplifying JDBC.
- Spring ORM: support for existing ORM frameworks.
- Spring Context: framework-style bean access plus enterprise-level features (JNDI, scheduled tasks, etc.).
- Spring Web: basic web-oriented features such as multipart file upload.
- Spring MVC: a Model-View-Controller implementation for web applications.

5. Explain the bean scopes Spring supports.

Type: Description
singleton: only one instance exists in the Spring container; the bean exists as a singleton.
prototype: every getBean() call performs a new operation and returns a fresh instance.
request: each HTTP request creates a new bean.
session: one bean is shared within the same HTTP session; different HTTP sessions use different beans.
globalSession: one bean is shared by a global session, generally used in Portlet environments.

6. Which design patterns are used inside the Spring framework?

- Factory: ApplicationContext uses the factory pattern to create bean objects.
- Singleton: the default scope of Spring beans is singleton.
- Prototype: scope="prototype"; each retrieval returns a new instance produced by cloning, so modifying it has no effect on the original instance.
- Proxy: Spring AOP is implemented on top of dynamic proxies.
- Template method: classes in Spring ending with Template, such as JdbcTemplate and SqlSessionTemplate, use the template method pattern.
- Decorator (dynamically adds extra properties or behavior to objects; more flexible than inheritance): when configuring a DataSource, the DataSource may be different databases and data sources; to switch between data sources dynamically with minimal changes to the original classes, the decorator pattern is used.
- Chain of responsibility: the doDispatch() method of DispatcherServlet obtains the HandlerExecutionChain matching the request; the handling in this.getHandler() uses the chain-of-responsibility pattern.
- Observer: Spring's Event and Listener.

7. What is MVC?

MVC is an architectural pattern in which software is split into three layers: Model, View, and Controller. The Model represents the data, the View the user interface, and the Controller the data-processing logic bridging the Model and View layers. Layering lowers the coupling between objects and makes the code easier to maintain.

8. Your understanding of Spring MVC

Spring MVC is an excellent MVC framework. It makes development cleaner, and it integrates seamlessly with Spring: it is a sub-module of Spring, the Web module of the Spring family mentioned above. The framework mainly consists of the DispatcherServlet, handler mappings, handlers (controllers), view resolvers, and views. Like many other MVC frameworks, Spring MVC is request-driven, dispatching requests and providing other functionality around a central servlet; DispatcherServlet is an actual Servlet (it inherits from the HttpServlet base class). (figure: Spring MVC execution flow)

9. How do you set up redirects and forwards in Spring MVC?

Spring MVC supports two dispatch styles, forward and redirect, handled in the controller layer with the forward and redirect keywords respectively. A redirect directs the user from the currently handled request to another view (e.g. a JSP) or handler; everything stored in the previous request is invalidated and a new request scope begins. A forward hands the user's current request over to another view or handler; what was stored in the previous request does not become invalid. Forwarding is a server-side action; redirecting is a client-side action.

1) Forwarding: the browser sends an HTTP request; the web server accepts it and calls an internal method to complete the request handling and forwarding inside the container, then sends the target resource to the client. The forwarded path must be a URL within the same web container and cannot point to another web path; the container's own request is passed along internally. The browser's address bar still shows the path of the first visit, so the client cannot tell that the server forwarded. A forward involves only one browser request.

2) Redirecting: the browser sends an HTTP request; the server responds with a 302 status code and a new Location; seeing the 302, the browser automatically issues a new HTTP request to the new Location, and the server finds the resource for that request and sends it to the client. The Location can redirect to any URL, and since the browser re-issues the request, there is no notion of passing the request along. The address bar shows the redirected path, so the client can observe the address change. A redirect involves at least two browser requests.

Sample code for redirect and forward in Spring MVC:

@RequestMapping("/login")
public String login() {
    // forward to a handler method (within the same controller class, /index/ can be omitted)
    return "forward:/index/isLogin";
}

@RequestMapping("/isLogin")
public String isLogin() {
    // redirect to a handler method
    return "redirect:/index/isRegister";
}

In Spring MVC, the return statement of a controller handler method is a forward by default, except that it forwards to a view:

@RequestMapping("/register")
public String register() {
    return "register";  // forward to register.jsp
}

10. Which Spring MVC annotations are commonly used?

Component-type annotations. Purpose: the annotated class is initialized by Spring as a bean and then managed centrally.
- @Component: added before a class definition; the class is recognized by the Spring container and turned into a bean.
- @Repository: annotates DAO implementation classes (a specialized @Component).
- @Service: annotates the business-logic layer (a specialized @Component).
- @Controller: annotates the controller layer (a specialized @Component).

== TODO ==

References

@Autowired和@Resource注解的区别和联系(十分详细,不看后悔)
依赖注入三种方式
设计模式_spring框架中常用的8种设计模式
2023-02-18
610 reads
0 comments
0 likes
2022-12-04
Drawing a heart with code
0. Background

I recently watched a drama called "Lighting the Fire, Warming You" (点燃我温暖你), in which a college freshman draws both a plain heart and a very flashy one in C++. My girlfriend found it quite interesting, so now that I have some free time I'm going to try reproducing it.

2. Drawing a static heart

2.0 Background knowledge

Drawing the heart mainly relies on Descartes' cardioid. In polar coordinates the cardioid is

$$ r=a(1-\sin\theta) $$

where a is a coefficient with a > 0, which can take any positive value and determines the size of the heart. Converting to Cartesian coordinates gives

$$ x^2+y^2=a\sqrt{x^2+y^2}-ay $$

After a series of translations and adjustments, the implicit curve actually used for drawing is

$$ (x^2+y^2-1)^3-x^2y^3=0 $$

2.1 Code

#include <iostream>
#include <windows.h>
#include <cmath>
using namespace std;

int main() {
    float x, y;
    // draw the heart row by row, top to bottom
    for (y = 1.5f; y > -1.5f; y -= 0.1f) {
        for (x = -1.5f; x < 1.5f; x += 0.05f) {
            // a point is inside the heart iff (x^2+y^2-1)^3 - x^2*y^3 <= 0
            float fx = pow(x * x + y * y - 1, 3) - pow(x, 2) * pow(y, 3);
            if (fx <= 0.0f) {
                cout << '*';
            } else {
                cout << ' ';
            }
        }
        cout << endl;
    }
    system("pause");
    return 0;
}

2.2 Result

(figure: ASCII heart printed to the console)

A Python version of the same scan is sketched below.
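For comparison, here is a minimal Python sketch of the same scan (standard library only; it assumes the same implicit inequality and step sizes as the C++ version above):

# A minimal Python port of the C++ scan above: same implicit curve,
# same step sizes, '*' inside the heart and ' ' outside.
def draw_heart(y_step=0.1, x_step=0.05):
    y = 1.5
    while y > -1.5:
        row = []
        x = -1.5
        while x < 1.5:
            fx = (x * x + y * y - 1) ** 3 - (x ** 2) * (y ** 3)
            row.append('*' if fx <= 0.0 else ' ')
            x += x_step
        print(''.join(row))
        y -= y_step

if __name__ == "__main__":
    draw_heart()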
3. Drawing a beating heart

3.0 How it works

Copied over from GitHub; I'm still working through the principle, so I'm just saving it here for now.

3.1 Code

heart.py

from math import cos, pi
import numpy as np
import cv2
import os, glob


class HeartSignal:
    def __init__(self, curve="heart", title="Love U", frame_num=20, seed_points_num=2000, seed_num=None,
                 highlight_rate=0.3, background_img_dir="", set_bg_imgs=False, bg_img_scale=0.2, bg_weight=0.3,
                 curve_weight=0.7, frame_width=1080, frame_height=960, scale=10.1, base_color=None,
                 highlight_points_color_1=None, highlight_points_color_2=None, wait=100, n_star=5, m_star=2):
        super().__init__()
        self.curve = curve
        self.title = title
        self.highlight_points_color_2 = highlight_points_color_2
        self.highlight_points_color_1 = highlight_points_color_1
        self.highlight_rate = highlight_rate
        self.base_color = base_color
        self.n_star = n_star
        self.m_star = m_star
        self.curve_weight = curve_weight
        img_paths = glob.glob(background_img_dir + "/*")
        self.bg_imgs = []
        self.set_bg_imgs = set_bg_imgs
        self.bg_weight = bg_weight
        if os.path.exists(background_img_dir) and len(img_paths) > 0 and set_bg_imgs:
            for img_path in img_paths:
                img = cv2.imread(img_path)
                self.bg_imgs.append(img)
            first_bg = self.bg_imgs[0]
            width = int(first_bg.shape[1] * bg_img_scale)
            height = int(first_bg.shape[0] * bg_img_scale)
            first_bg = cv2.resize(first_bg, (width, height), interpolation=cv2.INTER_AREA)

            # align the images, auto-cropping around the center
            new_bg_imgs = [first_bg, ]
            for img in self.bg_imgs[1:]:
                width_close = abs(first_bg.shape[1] - img.shape[1]) < abs(first_bg.shape[0] - img.shape[0])
                if width_close:
                    # resize
                    height = int(first_bg.shape[1] / img.shape[1] * img.shape[0])
                    width = first_bg.shape[1]
                    img = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
                    # crop and fill
                    if img.shape[0] > first_bg.shape[0]:
                        crop_num = img.shape[0] - first_bg.shape[0]
                        crop_top = crop_num // 2
                        crop_bottom = crop_num - crop_top
                        img = np.delete(img, range(crop_top), axis=0)
                        img = np.delete(img, range(img.shape[0] - crop_bottom, img.shape[0]), axis=0)
                    elif img.shape[0] < first_bg.shape[0]:
                        fill_num = first_bg.shape[0] - img.shape[0]
                        fill_top = fill_num // 2
                        fill_bottom = fill_num - fill_top
                        img = np.concatenate([np.zeros([fill_top, width, 3]), img, np.zeros([fill_bottom, width, 3])], axis=0)
                else:
                    width = int(first_bg.shape[0] / img.shape[0] * img.shape[1])
                    height = first_bg.shape[0]
                    img = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
                    # crop and fill
                    if img.shape[1] > first_bg.shape[1]:
                        crop_num = img.shape[1] - first_bg.shape[1]
                        crop_top = crop_num // 2
                        crop_bottom = crop_num - crop_top
                        img = np.delete(img, range(crop_top), axis=1)
                        img = np.delete(img, range(img.shape[1] - crop_bottom, img.shape[1]), axis=1)
                    elif img.shape[1] < first_bg.shape[1]:
                        fill_num = first_bg.shape[1] - img.shape[1]
                        fill_top = fill_num // 2
                        fill_bottom = fill_num - fill_top
                        img = np.concatenate([np.zeros([fill_top, width, 3]), img, np.zeros([fill_bottom, width, 3])], axis=1)
                new_bg_imgs.append(img)
            self.bg_imgs = new_bg_imgs
            assert all(img.shape[0] == first_bg.shape[0] and img.shape[1] == first_bg.shape[1]
                       for img in self.bg_imgs), "background images differ in width or height"
            self.frame_width = self.bg_imgs[0].shape[1]
            self.frame_height = self.bg_imgs[0].shape[0]
        else:
            self.frame_width = frame_width  # window width
            self.frame_height = frame_height  # window height
        self.center_x = self.frame_width / 2
        self.center_y = self.frame_height / 2

        self.main_curve_width = -1
        self.main_curve_height = -1
        self.frame_points = []  # per-frame point coordinates
        self.frame_num = frame_num  # number of frames
        self.seed_num = seed_num  # pseudo-random seed; once set, the relative positions of all particles except the halo stay fixed (reduces inner flicker)
        self.seed_points_num = seed_points_num  # number of seed particles of the main figure
        self.scale = scale  # zoom ratio
        self.wait = wait

    def curve_function(self, curve):
        curve_dict = {
            "heart": self.heart_function,
            "butterfly": self.butterfly_function,
            "star": self.star_function,
        }
        return curve_dict[curve]

    def heart_function(self, t, frame_idx=0, scale=5.20):
        """
        Curve equation.
        :param frame_idx: frame index, used to vary the heart shape across frames
        :param scale: zoom ratio
        :param t: parameter
        :return: coordinates
        """
        trans = 3 - (1 + self.periodic_func(frame_idx, self.frame_num)) * 0.5  # parameter controlling the fullness of the heart
        x = 15 * (np.sin(t) ** 3)
        t = np.where((pi < t) & (t < 2 * pi), 2 * pi - t, t)  # mirror the x > 0 part of the curve into quadrants 3 and 4
        y = -(14 * np.cos(t) - 4 * np.cos(2 * t) - 2 * np.cos(3 * t) - np.cos(trans * t))
        ign_area = 0.15
        center_ids = np.where((x > -ign_area) & (x < ign_area))
        if np.random.random() > 0.32:
            x, y = np.delete(x, center_ids), np.delete(y, center_ids)  # thin out the densest part, for looks

        # zoom
        x *= scale
        y *= scale

        # move to the canvas center
        x += self.center_x
        y += self.center_y

        # original heart equation
        # x = 15 * (sin(t) ** 3)
        # y = -(14 * cos(t) - 4 * cos(2 * t) - 2 * cos(3 * t) - cos(3 * t))

        return x.astype(int), y.astype(int)

    def butterfly_function(self, t, frame_idx=0, scale=5.2):
        """
        Curve function.
        :param frame_idx: frame index
        :param scale: zoom ratio
        :param t: parameter
        :return: coordinates
        """
        # base function
        # t = t * pi
        p = np.exp(np.sin(t)) - 2.5 * np.cos(4 * t) + np.sin(t) ** 5
        x = 5 * p * np.cos(t)
        y = - 5 * p * np.sin(t)

        # zoom
        x *= scale
        y *= scale

        # move to the canvas center
        x += self.center_x
        y += self.center_y
        return x.astype(int), y.astype(int)

    def star_function(self, t, frame_idx=0, scale=5.2):
        n = self.n_star / self.m_star
        p = np.cos(pi / n) / np.cos(pi / n - (t % (2 * pi / n)))
        x = 15 * p * np.cos(t)
        y = 15 * p * np.sin(t)

        # zoom
        x *= scale
        y *= scale

        # move to the canvas center
        x += self.center_x
        y += self.center_y
        return x.astype(int), y.astype(int)

    def shrink(self, x, y, ratio, offset=1, p=0.5, dist_func="uniform"):
        """
        Jitter with a random displacement.
        :param x: original x
        :param y: original y
        :param ratio: shrink ratio
        :param p: force exponent
        :param offset: displacement magnitude
        :return: transformed x, y coordinates
        """
        x_ = (x - self.center_x)
        y_ = (y - self.center_y)
        force = 1 / ((x_ ** 2 + y_ ** 2) ** p + 1e-30)

        dx = ratio * force * x_
        dy = ratio * force * y_

        def d_offset(x):
            if dist_func == "uniform":
                return x + np.random.uniform(-offset, offset, size=x.shape)
            elif dist_func == "norm":
                return x + offset * np.random.normal(0, 1, size=x.shape)

        dx, dy = d_offset(dx), d_offset(dy)

        return x - dx, y - dy

    def scatter(self, x, y, alpha=0.75, beta=0.15):
        """
        Coordinate transform for random inward scattering.
        :param alpha: scatter factor, looseness
        :param x: original x
        :param y: original y
        :param beta: scatter factor, distance
        :return: new x, y coordinates
        """
        ratio_x = - beta * np.log(np.random.random(x.shape) * alpha)
        ratio_y = - beta * np.log(np.random.random(y.shape) * alpha)
        dx = ratio_x * (x - self.center_x)
        dy = ratio_y * (y - self.center_y)

        return x - dx, y - dy

    def periodic_func(self, x, x_num):
        """
        Beat-cycle curve.
        :param x: frame index within the period
        :return: y
        """
        # try other dynamics functions here for a punchier effect (Bezier?)
        def ori_func(t):
            return cos(t)

        func_period = 2 * pi
        return ori_func(x / x_num * func_period)

    def gen_points(self, points_num, frame_idx, shape_func):
        # compute a factor from the periodic function and apply it to every component,
        # so that all parts pulse with the same period
        cy = self.periodic_func(frame_idx, self.frame_num)
        ratio = 10 * cy

        # main figure
        period = 2 * pi * self.m_star if self.curve == "star" else 2 * pi
        seed_points = np.linspace(0, period, points_num)
        seed_x, seed_y = shape_func(seed_points, frame_idx, scale=self.scale)
        x, y = self.shrink(seed_x, seed_y, ratio, offset=2)
        curve_width, curve_height = int(x.max() - x.min()), int(y.max() - y.min())
        self.main_curve_width = max(self.main_curve_width, curve_width)
        self.main_curve_height = max(self.main_curve_height, curve_height)
        point_size = np.random.choice([1, 2], x.shape, replace=True, p=[0.5, 0.5])
        tag = np.ones_like(x)

        def delete_points(x_, y_, ign_area, ign_prop):
            ign_area = ign_area
            center_ids = np.where((x_ > self.center_x - ign_area) & (x_ < self.center_x + ign_area))
            center_ids = center_ids[0]
            np.random.shuffle(center_ids)
            del_num = round(len(center_ids) * ign_prop)
            del_ids = center_ids[:del_num]
            x_, y_ = np.delete(x_, del_ids), np.delete(y_, del_ids)  # thin out the densest part, for looks
            return x_, y_

        # multi-layer scattering
        for idx, beta in enumerate(np.linspace(0.05, 0.2, 6)):
            alpha = 1 - beta
            x_, y_ = self.scatter(seed_x, seed_y, alpha, beta)
            x_, y_ = self.shrink(x_, y_, ratio, offset=round(beta * 15))
            x = np.concatenate((x, x_), 0)
            y = np.concatenate((y, y_), 0)
            p_size = np.random.choice([1, 2], x_.shape, replace=True, p=[0.55 + beta, 0.45 - beta])
            point_size = np.concatenate((point_size, p_size), 0)
            tag_ = np.ones_like(x_) * 2
            tag = np.concatenate((tag, tag_), 0)

        # halo
        halo_ratio = int(7 + 2 * abs(cy))  # shrink ratio varies with the cycle

        # base halo
        x_, y_ = shape_func(seed_points, frame_idx, scale=self.scale + 0.9)
        x_1, y_1 = self.shrink(x_, y_, halo_ratio, offset=18, dist_func="uniform")
        x_1, y_1 = delete_points(x_1, y_1, 20, 0.5)
        x = np.concatenate((x, x_1), 0)
        y = np.concatenate((y, y_1), 0)

        # burst halo
        halo_number = int(points_num * 0.6 + points_num * abs(cy))  # the halo point count also varies with the cycle
        seed_points = np.random.uniform(0, 2 * pi, halo_number)
        x_, y_ = shape_func(seed_points, frame_idx, scale=self.scale + 0.9)
        x_2, y_2 = self.shrink(x_, y_, halo_ratio, offset=int(6 + 15 * abs(cy)), dist_func="norm")
        x_2, y_2 = delete_points(x_2, y_2, 20, 0.5)
        x = np.concatenate((x, x_2), 0)
        y = np.concatenate((y, y_2), 0)

        # expanding halo
        x_3, y_3 = shape_func(np.linspace(0, 2 * pi, int(points_num * .4)), frame_idx, scale=self.scale + 0.2)
        x_3, y_3 = self.shrink(x_3, y_3, ratio * 2, offset=6)
        x = np.concatenate((x, x_3), 0)
        y = np.concatenate((y, y_3), 0)

        halo_len = x_1.shape[0] + x_2.shape[0] + x_3.shape[0]
        p_size = np.random.choice([1, 2, 3], halo_len, replace=True, p=[0.7, 0.2, 0.1])
        point_size = np.concatenate((point_size, p_size), 0)
        tag_ = np.ones(halo_len) * 2 * 3
        tag = np.concatenate((tag, tag_), 0)

        x_y = np.around(np.stack([x, y], axis=1), 0)
        x, y = x_y[:, 0], x_y[:, 1]
        return x, y, point_size, tag

    def get_frames(self, shape_func):
        for frame_idx in range(self.frame_num):
            np.random.seed(self.seed_num)
            self.frame_points.append(self.gen_points(self.seed_points_num, frame_idx, shape_func))

        frames = []

        def add_points(frame, x, y, size, tag):
            highlight1 = np.array(self.highlight_points_color_1, dtype='uint8')
            highlight2 = np.array(self.highlight_points_color_2, dtype='uint8')
            base_col = np.array(self.base_color, dtype='uint8')

            x, y = x.astype(int), y.astype(int)
            frame[y, x] = base_col

            size_2 = np.int64(size == 2)
            frame[y, x + size_2] = base_col
            frame[y + size_2, x] = base_col

            size_3 = np.int64(size == 3)
            frame[y + size_3, x] = base_col
            frame[y - size_3, x] = base_col
            frame[y, x + size_3] = base_col
            frame[y, x - size_3] = base_col
            frame[y + size_3, x + size_3] = base_col
            frame[y - size_3, x - size_3] = base_col
            # frame[y - size_3, x + size_3] = color
            # frame[y + size_3, x - size_3] = color

            # highlights
            random_sample = np.random.choice([1, 0], size=tag.shape, p=[self.highlight_rate, 1 - self.highlight_rate])
            # tag2_size1 = np.int64((tag <= 2) & (size == 1) & (random_sample == 1))
            # frame[y * tag2_size1, x * tag2_size1] = highlight2
            tag2_size2 = np.int64((tag <= 2) & (size == 2) & (random_sample == 1))
            frame[y * tag2_size2, x * tag2_size2] = highlight1
            # frame[y * tag2_size2, (x + 1) * tag2_size2] = highlight2
            # frame[(y + 1) * tag2_size2, x * tag2_size2] = highlight2
            frame[(y + 1) * tag2_size2, (x + 1) * tag2_size2] = highlight2

        for x, y, size, tag in self.frame_points:
            frame = np.zeros([self.frame_height, self.frame_width, 3], dtype="uint8")
            add_points(frame, x, y, size, tag)
            frames.append(frame)
        return frames

    def draw(self, times=10):
        frames = self.get_frames(self.curve_function(self.curve))
        for i in range(times):
            for frame in frames:
                frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
                if len(self.bg_imgs) > 0 and self.set_bg_imgs:
                    frame = cv2.addWeighted(self.bg_imgs[i % len(self.bg_imgs)], self.bg_weight,
                                            frame, self.curve_weight, 0)
                cv2.imshow(self.title, frame)
                cv2.waitKey(self.wait)


if __name__ == '__main__':
    import yaml
    settings = yaml.load(open("./settings.yaml", "r", encoding="utf-8"), Loader=yaml.FullLoader)

    if settings["wait"] == -1:
        settings["wait"] = int(settings["period_time"] / settings["frame_num"])
    del settings["period_time"]

    times = settings["times"]
    del settings["times"]

    heart = HeartSignal(seed_num=5201314, **settings)
    heart.draw(times)

settings.yaml

# Colors: RGB values, 0~255
# When choosing highlights, pick colors close to the main color for a more harmonious look
# the blue tone from the show
#base_color:  # main color, default rose pink
#  - 30
#  - 100
#  - 100
#highlight_points_color_1:  # highlight particle color 1, default light purple
#  - 150
#  - 120
#  - 220
#highlight_points_color_2:  # highlight particle color 2, default light pink
#  - 128
#  - 140
#  - 140

base_color:  # main color, default rose pink
  - 228
  - 100
  - 100
highlight_points_color_1:  # highlight particle color 1, default light purple
  - 180
  - 87
  - 200
highlight_points_color_2:  # highlight particle color 2, default light pink
  - 228
  - 140
  - 140

period_time: 1000 * 2  # period length, 1.5s per period by default (note: YAML keeps this as the string "1000 * 2", which only matters if wait is -1)
times: 50  # number of periods to play; the heart beats once per period
frame_num: 24  # frames generated per period
wait: 60  # how long each frame stays on screen; too short may cause flicker, -1 means period_time / frame_num
seed_points_num: 2000  # seed particles of the main figure; total particle count is about 8x this (scatter and halo included)
highlight_rate: 0.2  # proportion of highlight particles
frame_width: 720  # window width in pixels, ignored once background images are set
frame_height: 640  # window height in pixels, ignored once background images are set
scale: 9.1  # zoom ratio of the main figure
curve: "heart"  # figure type: heart, butterfly, star
n_star: 7  # n-gon/star, only takes effect when curve is star; five-pointed star: n-star: 5, m-star: 2
m_star: 3  # only takes effect when curve is star; m-star is 1 for an n-gon and greater than 1 for an n-star, e.g. seven-pointed star: n-star: 7, m-star: 2 or 3
title: "Love Li Xun"  # letters only, Chinese renders as mojibake
background_img_dir: "src/center_imgs"  # directory for background images; at least ~400x400 px recommended, otherwise it may error; if the images really are small, reduce scale above to shrink the heart
set_bg_imgs: false  # true or false; false uses the default black background
bg_img_scale: 0.6  # 0 - 1, background image zoom ratio
bg_weight: 0.4  # 0 - 1, background image weight, roughly its opacity
curve_weight: 1  # same as above

# ======================== recommended parameters: copy the values over the corresponding ones above ==================
# butterfly; errors are most likely the butterfly being scaled beyond the window width/height
# curve: "butterfly"
# frame_width: 800
# frame_height: 720
# scale: 60
# base_color: [100, 100, 228]
# highlight_points_color_1: [180, 87, 200]
# highlight_points_color_2: [228, 140, 140]

3.2 Running result

(animation: beating heart, omitted)

References

爱心函数
https://github.com/131250208/FunnyToys/blob/main/heart.py
2022-12-04
472 reads
0 comments
0 likes