Source: https://blog.csdn.net/m0_56182552/article/details/144661044
Part 1: Applying the GroundingDINO model in a project to save YOLO-format label files that can be inspected with labelImg

GroundingDINO source code: https://github.com/IDEA-Research/GroundingDINO

Download the pre-trained weights from https://github.com/IDEA-Research/GroundingDINO/releases/download/v0.1.0-alpha/groundingdino_swint_ogc.pth — pasting this URL into a browser starts the download; put the file in the weights folder.
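If you prefer to script the download, the same file can be fetched with the Python standard library (a minimal sketch; the URL is the release link above):

import os
import urllib.request

# Release checkpoint published by IDEA-Research (same URL as above)
url = ("https://github.com/IDEA-Research/GroundingDINO/releases/download/"
       "v0.1.0-alpha/groundingdino_swint_ogc.pth")
os.makedirs("weights", exist_ok=True)  # create the weights folder if missing
urllib.request.urlretrieve(url, "weights/groundingdino_swint_ogc.pth")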

1. Set up the virtual environment and activate it.

2. On Windows, run the following in a PowerShell terminal:

$env:PYTHONPATH="E:\project\MODEL\Grounding_facebookrearch_SAM\GroundingDINO-main"; $env:CUDA_VISIBLE_DEVICES="0"; python demo/inference_on_a_image.py `
-c groundingdino/config/GroundingDINO_SwinT_OGC.py `
-p weights/groundingdino_swint_ogc.pth `
-i datasets/test/11/images/01.png `
-o "datasets/test/11/results" `
-t "apple"

On Ubuntu, run:

CUDA_VISIBLE_DEVICES=0 python demo/inference_on_a_image.py \
-c groundingdino/config/GroundingDINO_SwinT_OGC.py \
-p weights/groundingdino_swint_ogc.pth \
-i datasets/test/1_1/images/01.png \
-o "datasets/test/1_1/output" \
-t "peach"

3. Alternatively, use a script (tests a single image):

from groundingdino.util.inference import load_model, load_image, predict, annotate
import cv2

model = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py", "weights/groundingdino_swint_ogc.pth")
IMAGE_PATH = "weights/dog-3.jpeg"
TEXT_PROMPT = "chair . person . dog ."
BOX_TRESHOLD = 0.35
TEXT_TRESHOLD = 0.25

image_source, image = load_image(IMAGE_PATH)

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption=TEXT_PROMPT,
    box_threshold=BOX_TRESHOLD,
    text_threshold=TEXT_TRESHOLD,
)

annotated_frame = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
cv2.imwrite("annotated_image.jpg", annotated_frame)
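A note on the prompt: GroundingDINO treats the caption as period-separated category phrases, and the repo recommends lowercase terms each ending with a period. If the classes live in a list, the caption can be built programmatically (a small sketch):

# Build a GroundingDINO caption from a class list
# (convention: lowercase categories joined by " . ", terminated with " .")
classes = ["chair", "person", "dog"]
TEXT_PROMPT = " . ".join(classes) + " ."  # -> "chair . person . dog ."

Keeping the caption and the category list in sync matters later, because the class_id written to each label file is the category's index in that list.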

4. Test multiple images and save the labels (YOLO-style .txt files) for viewing in labelImg.

A useful finding: the boxes returned by predict already come as [[center_x, center_y, width, height], ...], normalized to [0, 1] relative to the image size — exactly the YOLO label convention. There is therefore no need to convert them to xyxy and re-normalize; they can be written to the .txt files as-is and the results opened in labelImg.
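If pixel-space corner coordinates are ever needed (for cropping, or as SAM box prompts as in Part 2), the normalized cxcywh boxes convert cleanly with torchvision. A minimal sketch, with made-up example numbers:

import torch
from torchvision.ops import box_convert

# boxes: (N, 4) normalized [cx, cy, w, h], as returned by predict()
boxes = torch.tensor([[0.50, 0.50, 0.20, 0.40]])
h, w = 480, 640  # image height/width in pixels
xyxy = box_convert(boxes * torch.tensor([w, h, w, h]),
                   in_fmt="cxcywh", out_fmt="xyxy")
print(xyxy)  # tensor([[256., 144., 384., 336.]])

The batch script below runs inference over a folder and writes one YOLO .txt per image: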
import os, torch, time
import cv2
import glob
from groundingdino.util.inference import load_model, load_image, predict, annotate

# Model config and weights
model = load_model("groundingdino/config/GroundingDINO_SwinT_OGC.py",
                   "weights/groundingdino_swint_ogc.pth")

# Input images, annotated output images, and YOLO label files
input_folder = "datasets/test/1_2/images"
output_folder = "datasets/test/1_2/1output"
labels_folder = "datasets/test/1_2/1labels"
os.makedirs(output_folder, exist_ok=True)
os.makedirs(labels_folder, exist_ok=True)

# Text prompt and thresholds (adjust as needed)
TEXT_PROMPT = "apple . peach . dog ."
BOX_TRESHOLD = 0.35
TEXT_TRESHOLD = 0.25

# Parse TEXT_PROMPT into an ordered category list; the index becomes the YOLO class_id
categories = [category.strip() for category in TEXT_PROMPT.split(".") if category.strip()]

# Collect all images (assumes .png; change the pattern if needed)
image_paths = glob.glob(os.path.join(input_folder, "*.png"))

start_time = time.time()

for IMAGE_PATH in image_paths:
    image_source, image = load_image(IMAGE_PATH)

    boxes, logits, phrases = predict(
        model=model,
        image=image,
        caption=TEXT_PROMPT,
        box_threshold=BOX_TRESHOLD,
        text_threshold=TEXT_TRESHOLD,
    )

    # Annotate and save the result image.
    # (To time pure inference, move the file-saving below outside the timed loop.)
    annotated_frame = annotate(image_source=image_source, boxes=boxes,
                               logits=logits, phrases=phrases)
    output_image_path = os.path.join(output_folder, os.path.basename(IMAGE_PATH))
    cv2.imwrite(output_image_path, annotated_frame)

    # Save the predicted boxes as a YOLO-format label file.
    # The boxes are already normalized [center_x, center_y, width, height] in [0, 1],
    # i.e. fractions of the image width/height, so they can be written out directly;
    # normalizing them again would give wrong results.
    label_file_path = os.path.join(
        labels_folder, os.path.splitext(os.path.basename(IMAGE_PATH))[0] + ".txt")
    with open(label_file_path, 'w') as label_file:
        for box, phrase in zip(boxes, phrases):
            x_center, y_center, width, height = box
            print(f"Normalized coordinates: x_center={x_center:.6f}, "
                  f"y_center={y_center:.6f}, width={width:.6f}, height={height:.6f}")
            # Map the phrase back to a class_id; skip unknown categories
            class_id = categories.index(phrase) if phrase in categories else -1
            if class_id >= 0:
                # YOLO format: class_id x_center y_center width height
                label_file.write(f"{class_id} {x_center:.6f} {y_center:.6f} "
                                 f"{width:.6f} {height:.6f}\n")

    print(f"Processed and saved: {output_image_path} and {label_file_path}")

end_time = time.time()

# Total time and throughput (includes image/label saving as written)
total_time = end_time - start_time
print(f"Processed {len(image_paths)} images in {total_time:.2f} seconds.")
fps = len(image_paths) / total_time
print(f"Processing speed: {fps:.2f} frames per second.")
# Note: the Part 2 script instead converts boxes to pixel-space xyxy first,
# then re-normalizes them when writing YOLO labels.

[Figure: the saved detection result image and the corresponding .txt label file]

[Figure: the labels opened in labelImg]

Part 2: Applying GroundingDINO together with SAM: GroundingDINO first produces boxes, which then serve as prompts for SAM, yielding a mask for each object. The results are saved both as YOLO-format labels viewable in labelImg and as labelme-compatible JSON files.

GroundingDINO source code: https://github.com/IDEA-Research/GroundingDINO

Segment Anything Model (SAM) source code: https://github.com/facebookresearch/segment-anything

After downloading both projects, set up the environment (note: the model checkpoints must be downloaded separately).

I moved both checkpoints into the same weights folder.
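For reference, the ViT-H SAM checkpoint used below can be fetched the same way as the GroundingDINO weights (a minimal sketch; the URL is the one published in the segment-anything README):

import os
import urllib.request

# ViT-H SAM checkpoint from the segment-anything README
url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth"
os.makedirs("weights", exist_ok=True)
urllib.request.urlretrieve(url, "weights/sam_vit_h_4b8939.pth")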

1. Test a single image (results are displayed, not saved).

# -*- coding: utf-8 -*-
# '''
# @date: 12/18/2023
# @author: laygin
# '''
# Tests a single image only; results are displayed, not saved.
import cv2, sys, os
import numpy as np
import torch
import matplotlib.pyplot as plt
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict, annotate

# segment_anything lives in ../segment-anything-main (e.g. under
# E:\project\MODEL\Grounding_facebookrearch_SAM\segment-anything-main), outside the
# GroundingDINO-main project, so add its absolute path to the module search path
segment_anything_path = os.path.abspath(
    os.path.join(os.path.dirname(__file__), "../segment-anything-main"))
sys.path.append(segment_anything_path)
from segment_anything import sam_model_registry, SamPredictor

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

cfg_path = r'groundingdino\config\GroundingDINO_SwinT_OGC.py'
weight_path = r'weights\groundingdino_swint_ogc.pth'
gdino_model = load_model(cfg_path, weight_path)
BOX_TRESHOLD = 0.35
TEXT_TRESHOLD = 0.25

sam_checkpoint = r"weights\sam_vit_h_4b8939.pth"
model_type = "vit_h"
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
sam.to(device=device)


def get_box_from_gdino_with_text_prompt(ip, text_prompt):
    image_source, image = load_image(ip)
    h, w = image_source.shape[:2]
    boxes, logits, phrases = predict(
        model=gdino_model,
        image=image,
        caption=text_prompt,
        box_threshold=BOX_TRESHOLD,
        text_threshold=TEXT_TRESHOLD,
        device="cpu",
    )
    annotated_frame = annotate(image_source=image_source, boxes=boxes,
                               logits=logits, phrases=phrases)
    plot_img(annotated_frame, name=f'gdino_res({text_prompt})')
    # Scale normalized cxcywh boxes to pixels and convert to xyxy for SAM
    boxes = boxes * torch.Tensor([w, h, w, h])
    xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()
    return image_source, xyxy


def seg_with_sam_with_box_prompt(image, boxes):
    predictor = SamPredictor(sam)
    predictor.set_image(image)
    for box in boxes:
        input_box = box.astype(int)
        print(input_box)
        masks, _, _ = predictor.predict(
            point_coords=None,
            point_labels=None,
            box=input_box[None, :],
            multimask_output=False,
        )
        plt.figure(figsize=(8, 8))
        plt.imshow(image)
        show_mask(masks[0], plt.gca())
        show_box(input_box, plt.gca())
        plt.axis('off')
        plt.show()


def plot_img(img, name='img', x=40, y=30):
    cv2.namedWindow(name)
    cv2.moveWindow(name, x, y)
    cv2.imshow(name, img)
    if cv2.waitKey(0) == ord('q'):
        cv2.destroyAllWindows()


def show_box(box, ax):
    x0, y0 = box[0], box[1]
    w, h = box[2] - box[0], box[3] - box[1]
    ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor='green',
                               facecolor=(0, 0, 0, 0), lw=2))


def show_mask(mask, ax, random_color=False):
    if random_color:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
    else:
        color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
    h, w = mask.shape[-2:]
    mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
    ax.imshow(mask_image)


if __name__ == '__main__':
    ip = r"datasets\test\2_1\images\02_realsense_image_20241023_160220_239.png"
    text = 'peach . apple'
    image, boxes = get_box_from_gdino_with_text_prompt(ip, text)
    print(boxes, len(boxes))
    seg_with_sam_with_box_prompt(image, boxes)
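The script above only displays results on screen. If you also want to persist a mask for quick inspection, it can be written out as a black-and-white PNG (a minimal sketch; save_mask_png is a hypothetical helper, and masks[0] is the boolean mask produced inside seg_with_sam_with_box_prompt):

import cv2
import numpy as np

def save_mask_png(mask, path):
    # Hypothetical helper: write a boolean SAM mask as a 0/255 PNG
    cv2.imwrite(path, mask.astype(np.uint8) * 255)

# e.g. inside the loop of seg_with_sam_with_box_prompt:
# save_mask_png(masks[0], f"mask_{int(input_box[0])}_{int(input_box[1])}.png")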

2. Test multiple images and save the results (detection images and .txt files, plus segmentation images and JSON files).

"""对多张图像进行测试,使用 OpenCV 保存 get_box_from_gdino_with_text_prompt 的结果保存到 save_detect 文件夹、seg_with_sam_with_box_prompt 的结果保存到 save_seg 文件夹,
save_detect_txt以类似 YOLO 的格式保存检测结果(class cx cy w h)。用labelimg查看
save_seg_json以格式保存分割结果labelme,包括检测到的掩模的轮廓。"""# -*- coding: utf-8 -*-
import cv2, json
import os
import sys
import numpy as np
import torch
import matplotlib.pyplot as plt
from torchvision.ops import box_convert
from groundingdino.util.inference import load_model, load_image, predict, annotate# 动态添加 `segment-anything-main` 到模块路径
segment_anything_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "../segment-anything-main"))
sys.path.append(segment_anything_path)from segment_anything import sam_model_registry, SamPredictordevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')cfg_path = r'groundingdino\config\GroundingDINO_SwinT_OGC.py'
weight_path = r'weights\groundingdino_swint_ogc.pth'
gdino_model = load_model(cfg_path, weight_path)BOX_TRESHOLD = 0.35
TEXT_TRESHOLD = 0.25sam_checkpoint = r"weights\sam_vit_h_4b8939.pth"
model_type = "vit_h"
sam = sam_model_registry[model_type](checkpoint=sam_checkpoint)
sam.to(device=device)def ensure_dir(path):"""Ensure directory exists."""if not os.path.exists(path):os.makedirs(path)def get_box_from_gdino_with_text_prompt(ip, text_prompt, save_path):"""获取 GroundingDINO 的检测框并保存结果图像"""image_source, image = load_image(ip)h, w = image_source.shape[:2]boxes, logits, phrases = predict(model=gdino_model,image=image,caption=text_prompt,box_threshold=BOX_TRESHOLD,text_threshold=TEXT_TRESHOLD,device=device,)annotated_frame = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)save_img_path = os.path.join(save_path, os.path.basename(ip))cv2.imwrite(save_img_path, annotated_frame)  # 保存检测结果print(f"Saved detection result: {save_img_path}")boxes = boxes * torch.Tensor([w, h, w, h])xyxy = box_convert(boxes=boxes, in_fmt="cxcywh", out_fmt="xyxy").numpy()return image_source, xyxy, phrases, logits   #返回 image_source, xyxy, phrases 和 logits   之前只返回return image_source, xyxydef seg_with_sam_with_box_prompt(image, boxes, phrases, logits,  save_path, file_name):  # def seg_with_sam_with_box_prompt(image, boxes, save_path, file_name):"""使用 SAM 对检测框进行分割并保存结果图像,同时显示识别的类别phrases和置信度logits"""predictor = SamPredictor(sam)predictor.set_image(image)# plt.figure(figsize=(8, 8))  #matplotlib 默认会将图像的尺寸调整为画布大小(例如 800x800),而不是保留原始图像的尺寸。figsize=(8, 8) 指定了画布大小为 8x8 英寸,并且默认保存的 DPI(每英寸像素数)为 100,这导致保存到save_seg文件夹里的结果图的图像尺寸为8x100=800px(800x800) 。# plt.imshow(image)
#调整 matplotlib 保存图像的方式,使其与原始图像分辨率一致。创建 Matplotlib 图像,与原图尺寸匹配h, w, _ = image.shapedpi = 100  # 每英寸像素数fig, ax = plt.subplots(figsize=(w / dpi, h / dpi), dpi=dpi)ax.imshow(image)maskss = []  # 存储所有掩码labels = []  # 存储所有标签for box, phrase, logit in zip(boxes, phrases, logits):    #for box in boxes:  zip(boxes, phrases, logits): 通过 zip() 函数将 boxes(边界框)、phrases(类别)和 logits(置信度)组合在一起,确保每个框都有对应的类别和置信度。input_box = box.astype(int)print(f"Processing box: {input_box}")# 获取预测的掩码masks, _, _ = predictor.predict(point_coords=None,point_labels=None,box=input_box[None, :],multimask_output=False,)# 直接将当前掩码(二维数组)添加到 masks 列表maskss.append(masks[0])  # current_masks[0] 是二维掩码# Concatenate the mask to the list of masks# 将掩码添加到列表中labels.append(phrase)  # 添加标签到列表中# 确保mask是二维的并将其添加到masks列表中# masks.append(mask[0])  # # 假设 'mask' 是一个 numpy 数组,且 'mask[0]' 是有效元素 # mask[0] 是二值掩码图像(假设predictor.predict()返回的是一个batch)# masks = np.append(masks, mask[0], axis=0)  # 这会将 'mask[0]' 添加到 'masks' 数组中,如果 masks 已经是 NumPy 数组,并且你希望继续使用 NumPy 的 append,可以使用 np.append:# 显示分割掩码和检测框# show_mask(masks[0], plt.gca())# show_box(input_box, plt.gca())show_mask(masks[0], ax)show_box(input_box, ax)# 在框上方显示类别和置信度label = f"{phrase} {logit:.2f}"  # 这行代码将类别和置信度格式化为字符串,例如 'apple 0.75',其中 .2f 保留两位小数。ax.text(input_box[0], input_box[1] - 10, label, color='white', fontsize=10, ha='left', va='top',   #input_box[0] 和 input_box[1] 分别是矩形框左上角的坐标。bbox=dict(facecolor='black', alpha=0.6, edgecolor='none', boxstyle='round,pad=0.2'))   #ax.text(): 在每个框的上方添加文本,文本内容为类别和置信度(例如 "apple 0.75")。文本的颜色设置为 white,并使用 bbox 给文本加上一个黑色半透明背景,以增强可读性。plt.axis('off')plt.subplots_adjust(left=0, right=1, top=1, bottom=0)  # 去除多余边距save_img_path = os.path.join(save_path, f"{file_name}_seg.png")fig.savefig(save_img_path, dpi=dpi, bbox_inches='tight', pad_inches=0)  # 保留原始分辨率# plt.savefig(save_img_path)  # 保存分割结果plt.close(fig)print(f"Saved segmentation result: {save_img_path}")return maskss, labelsdef save_detect_txt(image_file, boxes, phrases, detect_save_path):"""保存检测框的YOLO格式的标签文件"""#获取图片尺寸image_source, image = load_image(image_file)h, w = image_source.shape[:2]#生成保存的txt文件路径txt_filename = os.path.splitext(os.path.basename(image_file))[0] + ".txt"txt_folder = os.path.join(detect_save_path, "labels")os.makedirs(txt_folder, exist_ok=True)  # 自动创建文件夹(如果不存在)txt_filepath = os.path.join(txt_folder,txt_filename)# 根据输入的 text 建立类别到 class_id 的映射categories = [t.strip() for t in text.split('.')]  # 分割并去掉多余空格category_to_id = {category: idx for idx, category in enumerate(categories)}with open(txt_filepath, "w") as f:#遍历每个检测框for box, phrase in zip(boxes, phrases):# 获取 phrase 对应的 class_idif phrase in category_to_id:class_id = category_to_id[phrase]else:print(f"Warning: Phrase '{phrase}' not found in text '{text}'")continue  # 跳过未匹配的类别#计算框的中心坐标和宽高,并归一化x_min, y_min, x_max, y_max = boxx_center = (x_min + x_max) / 2 / wy_center = (y_min + y_max) / 2 / hwidth = (x_max - x_min) / wheight = (y_max - y_min) / h# 写入 YOLO 格式的标签(class_id x_center y_center width height)f.write(f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}\n")print(f"Saved detection labels to: {txt_filepath}")def save_seg_json(image_file, maskss, labels, img_width, img_height, json_dir):"""保存分割结果为包含轮廓的 JSON 文件。参数:- image_file: 图像文件名- masks: 分割掩码的列表- phrases: 对应每个掩码的类别标签- img_width: 图像的宽度- img_height: 图像的高度- json_dir: 保存 JSON 文件的目录"""# 创建 LabelMe 风格的 JSON 数据shapes = []for i, mask in enumerate(maskss):mask = mask.squeeze()  # 转换为 NumPy 数组并去掉多余的维度points = np.argwhere(mask == 255)  # 获取掩码为 255 的点作为多边形的坐标if len(points) > 0:shape = 
{'label': labels[i],  # 使用 label_map 获取物体的标签'points': points.tolist(),  # 转换为列表格式'group_id': 1,'shape_type': 'polygon','flags': {}}shapes.append(shape)# 保存 JSON 文件json_data = {'version': '5.5.0','flags': {},'shapes': shapes,'imagePath': image_file + file_name_1  , #因为image_file +file_name_1里的第一个image_file是file_name是01,还不清楚为什么传给save_seg_json的是去除掉图像后缀的,但是保存下来用labelme查看的json文件里file_name是需要完整的图像名称的,所以加上file_name_1# 'imagePath': image_file ,'imageData': None,'imageHeight': img_height,'imageWidth': img_width}# 遍历每个掩码,将掩码转换为多边形点   确保掩码是单通道的 uint8 格式for idx, (mask, label) in enumerate(zip(maskss, labels)):mask = mask.squeeze()  # 转换为 NumPy 格式并确保是 uint8 类型mask = (mask > 0).astype(np.uint8) * 255  # 将掩码转换为二值格式(0 和 255)contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # 提取轮廓# 遍历每个轮廓,将其添加到 JSON 数据for contour in contours:points = contour.squeeze().tolist()  # 将轮廓点转换为列表格式if len(points) < 3:  # 确保多边形至少有 3 个点continuejson_data["shapes"].append({"label": label,"points": points,"group_id": idx,"shape_type": "polygon","flags": {}})# 确保保存目录存在if not os.path.exists(json_dir):os.makedirs(json_dir)  # 创建目录(如果不存在)# 检查 json_data 内容print(json_data)  # 调试输出 json_data# 保存 JSON 文件json_output_path = os.path.join(json_dir, os.path.splitext(image_file)[0] + ".json")with open(json_output_path, 'w') as json_file:json.dump(json_data, json_file, indent=4)print(f"Saved JSON with shapes for {json_output_path}.")def show_box(box, ax):"""显示检测框"""x0, y0 = box[0], box[1]w, h = box[2] - box[0], box[3] - box[1]ax.add_patch(plt.Rectangle((x0, y0), w, h, edgecolor='green', facecolor=(0, 0, 0, 0), lw=2))def show_mask(mask, ax, random_color=False):"""显示分割掩码"""if random_color:color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)else:color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])h, w = mask.shape[-2:]mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)ax.imshow(mask_image)if __name__ == '__main__':input_path = r"datasets\test\2_2\images"text = 'peach . apple'# text = 'peach'# 输出保存目录detect_save_path = "datasets/test/2_2/save_detect"seg_save_path = "datasets/test/2_2/save_seg"json_save_path = "datasets/test/2_2/save_json"ensure_dir(detect_save_path)ensure_dir(seg_save_path)ensure_dir(json_save_path)# 处理单张或多张图片if os.path.isdir(input_path):image_files = [os.path.join(input_path, f) for f in os.listdir(input_path) if f.endswith(('.png', '.jpg', '.jpeg'))]else:image_files = [input_path]for image_file in image_files:file_name = os.path.splitext(os.path.basename(image_file))[0]file_name_1 = os.path.splitext(os.path.basename(image_file))[1]print(f"file_name是:",file_name)print(f"image_file 是:",image_file)print(f"file_name_1是:",file_name_1)# 获取检测框并保存检测结果image, boxes, phrases, logits = get_box_from_gdino_with_text_prompt(image_file, text, detect_save_path)  #image, boxes = get_box_from_gdino_with_text_prompt(image_file, text, detect_save_path)# 使用 SAM 进行分割并保存分割结果maskss, labels = seg_with_sam_with_box_prompt(image, boxes, phrases, logits, seg_save_path, file_name)   #seg_with_sam_with_box_prompt(image, boxes, seg_save_path, file_name)# 保存 YOLO 格式的标签save_detect_txt(image_file, boxes, phrases, detect_save_path)# 保存分割结果到 JSON 文件save_seg_json(file_name, maskss, labels, image.shape[1], image.shape[0], json_save_path)
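To sanity-check a saved JSON file without opening labelme, the polygons can be rasterized back into a mask with OpenCV (a minimal sketch; the input path is a placeholder):

import json
import cv2
import numpy as np

with open("datasets/test/2_2/save_json/01.json") as f:  # placeholder file name
    data = json.load(f)

# Draw every saved polygon back onto a blank canvas of the original size
canvas = np.zeros((data["imageHeight"], data["imageWidth"]), dtype=np.uint8)
for shape in data["shapes"]:
    pts = np.array(shape["points"], dtype=np.int32)
    cv2.fillPoly(canvas, [pts], 255)
cv2.imwrite("check_mask.png", canvas)

If the white regions match the masks in the save_seg images, the JSON round-trips correctly.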
