
zeroscope_v2_576w video generation: study notes

2024/10/12 1:25:06 Source: https://blog.csdn.net/jacke121/article/details/139871353

Contents

Video generation code:

Dimension error:

Fix: modify the code:


Open source:

The video generation model Zeroscope is open source, free, and watermark-free


Video generation model Zeroscope_v2_576w open-sourced - Tencent Cloud Developer Community

Video generation code:

import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
from diffusers.utils import export_to_video
import os

# Optional proxy settings for networks that need them:
# os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
# os.environ['HTTPS_PROXY'] = 'https://127.0.0.1:7890'
os.environ["HF_TOKEN"] = "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"

# To load from a local download instead:
# pipe = DiffusionPipeline.from_pretrained(r"D:\360安全浏览器下载", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w",
    torch_dtype=torch.float16,
    use_auth_token=os.environ["HF_TOKEN"],
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()

prompt = "Darth Vader is surfing on waves"
video_frames = pipe(prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames
video_path = export_to_video(video_frames)
print(video_path)
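Before handing frames to export_to_video, it can help to normalize them yourself. The helper below is a minimal sketch (not part of diffusers, and `to_uint8_frames` is a name introduced here): it assumes float frames in [0, 1], possibly with a leading batch dimension, which matches the error analyzed in the next section.

```python
import numpy as np

def to_uint8_frames(video_frames):
    """Normalize frames to a list of (h, w, c) uint8 arrays.

    Sketch only: assumes float frames in [0, 1], possibly shaped
    (1, h, w, c) with a leading batch dimension.
    """
    frames = [np.asarray(f) for f in video_frames]
    # Drop a leading batch dimension if present: (1, h, w, c) -> (h, w, c)
    frames = [f[0] if f.ndim == 4 else f for f in frames]
    return [(np.clip(f, 0.0, 1.0) * 255).astype(np.uint8) for f in frames]

# Example with dummy frames shaped like the pipeline output:
dummy = [np.zeros((1, 320, 576, 3), dtype=np.float32) for _ in range(3)]
normalized = to_uint8_frames(dummy)
print(normalized[0].shape, normalized[0].dtype)  # (320, 576, 3) uint8
```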

Dimension error:

Traceback (most recent call last):
  File "E:\project\jijia\aaa.py", line 18, in <module>
    video_path = export_to_video(video_frames)
  File "D:\ProgramData\miniconda3\envs\pysd\lib\site-packages\diffusers\utils\export_utils.py", line 135, in export_to_video
    h, w, c = video_frames[0].shape
ValueError: too many values to unpack (expected 3)
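The error can be reproduced without the pipeline. Each frame arrives with an extra leading batch dimension, so a single frame is 4-D and the three-way unpack in export_to_video fails; indexing away the batch dimension restores the expected layout:

```python
import numpy as np

# A frame shaped (batch, h, w, c) instead of the expected (h, w, c):
frame = np.zeros((1, 320, 576, 3), dtype=np.float32)

try:
    h, w, c = frame.shape  # export_to_video expects exactly three values
except ValueError as err:
    print(err)  # too many values to unpack (expected 3)

# Dropping the leading batch dimension fixes the unpack:
h, w, c = frame[0].shape
print(h, w, c)  # 320 576 3
```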

Fix: modify the code:

def export_to_video(
    video_frames: Union[List[np.ndarray], List[PIL.Image.Image]],
    output_video_path: str = None,
    fps: int = 10,
) -> str:
    if is_opencv_available():
        import cv2
    else:
        raise ImportError(BACKENDS_MAPPING["opencv"][1].format("export_to_video"))
    if output_video_path is None:
        output_video_path = tempfile.NamedTemporaryFile(suffix=".mp4").name
    # Convert PIL images to numpy arrays if needed
    if isinstance(video_frames[0], PIL.Image.Image):
        video_frames = [np.array(frame) for frame in video_frames]
    # Ensure the frames are in the correct format
    if isinstance(video_frames[0], np.ndarray):
        # Check if frames are 4-dimensional and handle accordingly
        if len(video_frames[0].shape) == 4:
            video_frames = [frame[0] for frame in video_frames]
        # Convert frames to uint8
        video_frames = [(frame * 255).astype(np.uint8) for frame in video_frames]
    # Ensure all frames are in (height, width, channels) format
    h, w, c = video_frames[0].shape
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    video_writer = cv2.VideoWriter(output_video_path, fourcc, fps=fps, frameSize=(w, h))
    for frame in video_frames:
        img = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
        video_writer.write(img)
    video_writer.release()
    return output_video_path


# The original implementation, kept for comparison:
def export_to_video_o(
    video_frames: Union[List[np.ndarray], List[PIL.Image.Image]],
    output_video_path: str = None,
    fps: int = 10,
) -> str:
    if is_opencv_available():
        import cv2
    else:
        raise ImportError(BACKENDS_MAPPING["opencv"][1].format("export_to_video"))
    if output_video_path is None:
        output_video_path = tempfile.NamedTemporaryFile(suffix=".mp4").name
    if isinstance(video_frames[0], np.ndarray):
        video_frames = [(frame * 255).astype(np.uint8) for frame in video_frames]
    elif isinstance(video_frames[0], PIL.Image.Image):
        video_frames = [np.array(frame) for frame in video_frames]
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")
    h, w, c = video_frames[0].shape
    video_writer = cv2.VideoWriter(output_video_path, fourcc, fps=fps, frameSize=(w, h))
    for i in range(len(video_frames)):
        img = cv2.cvtColor(video_frames[i], cv2.COLOR_RGB2BGR)
        video_writer.write(img)
    return output_video_path
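An alternative that avoids patching library code is to index the batch dimension at the call site before handing the frames to the stock exporter. This is a sketch under the assumption that the pipeline output has shape (batch, num_frames, h, w, c); the dummy array below stands in for `pipe(prompt, ...).frames`, which requires the full pipeline to produce.

```python
import numpy as np

# Dummy stand-in for pipe(prompt, ...).frames with the assumed
# (batch, num_frames, h, w, c) layout:
frames = np.zeros((1, 24, 320, 576, 3), dtype=np.float32)

# frames[0] is a stack of 3-D frames that the stock export_to_video
# can unpack without modification:
per_frame = list(frames[0])
h, w, c = per_frame[0].shape
print(len(per_frame), (h, w, c))  # 24 (320, 576, 3)

# Then, at the call site: video_path = export_to_video(per_frame)
```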
