
Date: 2025/9/21 12:15:28  Source: https://blog.csdn.net/qq_53123067/article/details/146213904

This article takes a deep dive into optimizing file-upload operations in a React application to improve upload speed. It walks through common optimization techniques, best practices, and tooling step by step, to help you build a more efficient and responsive front-end file-handling system.

Contents

Uploading files

Parallel upload

Chunked upload

Chunked upload (Worker implementation)

Resumable upload

Uploading files

You have almost certainly run into file uploads while building a project. To let the user pick a file we can use the native HTML <input type="file"> element. Its files property returns a FileList, an array-like object containing the file or files the user selected. Each entry in the FileList is a File object, which inherits from Blob. For example:

const Index = () => {
  const fileChange = (e: any) => {
    console.log("file", e.target.files[0])
  }
  return <input type="file" name='file' onChange={(e) => fileChange(e)} />
}
export default Index

Front-end file upload generally happens in one of two ways:

Binary (Blob) transfer: FormData is a front-end object for building form data that mirrors the structure of an HTML form. With FormData we can append a file to the payload in binary form and then send it to the server in an HTTP request.

This approach essentially transfers the file as raw binary data, and the server can handle it exactly like a regular form file upload. Because the file is sent as-is with no extra encoding overhead, transfer is efficient, especially for large files; a misconfigured server, however, may fail to receive the file correctly.

The File object inherits from Blob, which means a File has every property and method a Blob has, so anywhere a Blob is accepted a File can in principle be used as well:

const Index = () => {
  const fileChange = (e: any) => {
    const file = e.target.files[0]
    const blob = new Blob([file], { type: 'text/plain' }) // wrap the file in a Blob so its contents can be read
    const file_blob = new File([blob], '学习任务.txt') // wrap the Blob back into a File for later use
    console.log("file", file)
    console.log("blob", blob)
    console.log("file_blob", file_blob)
  }
  return <input type="file" name='file' onChange={(e) => fileChange(e)} />
}
export default Index

Now we can use FormData to append the file to the form payload in binary form:

import axios from 'axios'
import { useState } from 'react'

const Index = () => {
  const [file, setFile] = useState<File | null>(null)
  const fileChange = (e: any) => {
    setFile(e.target.files[0])
  }
  // submit the file
  const submit = () => {
    const formData = new FormData()
    formData.append('user', '这是我的') // other fields to upload alongside the file
    if (file) formData.append('file', file) // append the file itself
    axios.post('xxx', formData)
  }
  return (
    <>
      <input type="file" name='file' onChange={(e) => fileChange(e)} />
      <button onClick={() => submit()}>Submit</button>
    </>
  )
}
export default Index

You can see that the endpoint we call receives the payload as the binary data we wanted, and the user field is sent along with it:

Base64 transfer: Base64 is an encoding that represents binary data using 64 printable characters. The front end uses a FileReader to read the file as binary data and convert it to a Base64 string, sends that string to the server in an HTTP request, and the server decodes it back into the original file.

A Base64 string can be embedded in JSON, which makes it convenient for APIs that only accept text. The downside is that Base64 inflates the data by roughly one third, which lengthens transfer time and noticeably increases the cost of uploading large files. Below we convert the File to a Blob, slice off part of the data, wrap it back into a File, and then encode it as Base64; the resulting Base64 string therefore contains only part of the original data:

import { useState } from 'react'

const Index = () => {
  const [imgBase64, setImgBase64] = useState<string>('')
  const fileChange = (e: any) => {
    const file = e.target.files[0]
    const blob = new Blob([file]).slice(0, 15000) // wrap the file in a Blob and keep only the first 15000 bytes
    const sliceFile = new File([blob], file.name) // wrap the slice back into a File for upload
    const fr = new FileReader()
    fr.readAsDataURL(sliceFile) // read the slice as a data URL
    fr.onload = function () {
      setImgBase64(fr.result as string) // store the image's base64 encoding
    }
  }
  return (
    <>
      <input type="file" name='file' onChange={(e) => fileChange(e)} />
      <img style={{ width: '200px' }} src={imgBase64} alt="" />
    </>
  )
}
export default Index

Because we kept only part of the Base64 data, the image renders only partially, which can be used for a thumbnail-like or text-preview effect:
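The one-third overhead mentioned above is easy to verify: Base64 turns every 3 input bytes into 4 output characters, so the encoded length is ceil(n / 3) * 4. A minimal sketch using plain btoa (available in browsers and modern Node):

```typescript
// Base64 maps every 3 bytes to 4 characters, so the encoded length is
// ceil(n / 3) * 4 -- roughly a 33% size increase.
const raw = "x".repeat(3000);   // 3000 bytes of ASCII payload
const encoded = btoa(raw);      // what readAsDataURL produces, minus the data: prefix
console.log(raw.length, encoded.length); // 3000 4000
```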

The diagram below summarizes how these types convert into one another:

Next, let's walk through some simple examples of upload scenarios that come up in real projects:

Parallel upload

Parallel upload is what we usually call multi-file upload. In plain JavaScript, all it takes is adding the multiple attribute to the file input:

import { useState } from 'react'

const Index = () => {
  // track the list of selected files in state
  const [fileList, setFileList] = useState<any>([]);
  const fileChange = (e: any) => {
    const files = e.target.files;
    if (files.length > 0) {
      // append the newly selected files to the existing list
      setFileList((prevFileList: any) => [...prevFileList, ...files]);
    }
  };
  return (
    <>
      <input type="file" name='file' onChange={(e) => fileChange(e)} multiple />
      <button onClick={() => console.log("All files:", fileList)}>View all selected files</button>
    </>
  );
};
export default Index;

Holding the Ctrl key while picking files lets the user select several at once. Whether the user selects one file or many, we end up with all of the selected files' data, as shown below:

When we want to call the upload endpoint in parallel, a simple loop does the job: take each selected file, put it into a FormData object as binary data, and send it to the back end:

import axios from 'axios'
import { useState } from 'react'

const Index = () => {
  // track the list of selected files in state
  const [fileList, setFileList] = useState<any>([]);
  const fileChange = (e: any) => {
    const files = e.target.files;
    if (files.length > 0) {
      // append the newly selected files to the existing list
      setFileList((prevFileList: any) => [...prevFileList, ...files]);
    }
  };
  const submit = () => {
    fileList.forEach((file: any) => {
      const formData = new FormData();
      formData.append('files', file);
      axios.post('/api/upload', formData)
    });
  }
  return (
    <>
      <input type="file" name='file' onChange={(e) => fileChange(e)} multiple />
      <button onClick={() => submit()}>Submit</button>
    </>
  );
};
export default Index;

The result looks like this:

Chunked upload

Chunked upload splits a large file into small pieces (chunks), uploads them to the server individually, and reassembles them into the complete file on the server. This reduces the payload of any single request and makes network interruptions or failures much easier to handle. The core idea:

1) The front end calls an endpoint to check whether any chunks of the current file were uploaded before, to decide whether all chunks or only some need to be sent; if the file is smaller than the chunk size, it is simply uploaded in a single request

2) The front end slices the file and requests the upload endpoint only for the chunks that are actually needed

3) Once all chunk requests complete successfully, the front end calls the back end's merge endpoint

4) After validating the chunks, the back end merges them into the full file

The complete code:

import axios from 'axios';
import { useState } from 'react';

const FileUpload = () => {
  const [selectedFiles, setSelectedFiles] = useState<File[]>([]);
  const [uploadProgress, setUploadProgress] = useState<number>(0);
  const chunkSize = 1024 * 1024 * 5; // 5MB per chunk

  // 01 upload a single chunk
  const uploadChunk = async (chunk: Blob, index: number, fileName: string, chunkCount: number) => {
    const formData = new FormData();
    formData.append('file', chunk, `${fileName}.part${index}`);
    formData.append('chunkIndex', index.toString());
    formData.append('chunkCount', chunkCount.toString());
    return axios.post('https://localhost:7189/api/File/Upload', formData, {
      headers: { 'Accept': 'application/json' },
      onUploadProgress: (e: any) => {
        const { loaded, total } = e;
        updateProgress(fileName, loaded / total);
      }
    });
  };

  // 02 fetch the list of chunks already uploaded
  const getUploadedChunks = async (fileName: string) => {
    const res = await axios.get(`https://localhost:7189/api/File/UploadedChunks?fileName=${fileName}`);
    return res.data.uploadedChunks;
  };

  // 03 upload a single file and trigger the merge
  const uploadSingleFile = async (file: File) => {
    const chunkCount = Math.ceil(file.size / chunkSize);
    if (file.size < chunkSize) {
      const formData = new FormData();
      formData.append('file', file);
      return axios.post('https://localhost:7189/api/File/Upload', formData, {
        headers: { 'Accept': 'application/json' },
        onUploadProgress: (e: any) => {
          const { loaded, total } = e;
          updateProgress(file.name, loaded / total);
        }
      });
    } else {
      const uploaded = new Set(await getUploadedChunks(file.name));
      const promises = [];
      for (let i = 0; i < chunkCount; i++) {
        if (uploaded.has(`${file.name}.part${i}`)) continue; // skip chunks the server already has
        const start = i * chunkSize;
        const end = Math.min(start + chunkSize, file.size);
        const chunk = file.slice(start, end);
        promises.push(uploadChunk(chunk, i, file.name, chunkCount));
      }
      await Promise.all(promises);
      return fetch('https://localhost:7189/api/File/Merge', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ fileName: file.name, chunkCount })
      });
    }
  };

  // 04 update the overall upload progress
  const updateProgress = (fileName: string, fileProgress: number) => {
    const totalSize = selectedFiles.reduce((acc, file) => acc + file.size, 0);
    let uploadedSize = 0;
    selectedFiles.forEach((file) => {
      if (file.name === fileName) {
        uploadedSize += file.size * fileProgress;
      } else {
        // assume other files are at 100%, or track per-file progress in an object
        uploadedSize += file.size;
      }
    });
    setUploadProgress(uploadedSize / totalSize);
  };

  // 05 kick off the upload
  const handleUpload = async () => {
    if (selectedFiles.length === 0) return;
    setUploadProgress(0);
    const promises = selectedFiles.map(uploadSingleFile);
    try {
      await Promise.all(promises);
      console.log('All files uploaded');
    } catch (error) {
      console.log('Some files failed to upload');
    }
  };

  // 06 file selection
  const handleFileChange = (e: any) => {
    setSelectedFiles(Array.from(e.target.files));
  };

  return (
    <div>
      <input type="file" multiple onChange={handleFileChange} />
      <button onClick={handleUpload}>Upload</button>
      {uploadProgress ? <div>Upload progress: {(uploadProgress * 100).toFixed(2)}%</div> : null}
    </div>
  );
};
export default FileUpload;

I added some multi-file handling here as well; it may not be complete, but it basically works. The effect is shown below:

Here is the back-end implementation for those interested. I wrote the endpoints in .NET Core. The code defines the storage directories for uploaded and merged files plus the chunk size; the upload endpoint handles both whole-file and chunk uploads and decides from the chunk parameters whether to insert a database record; another endpoint lists the chunks already uploaded for a given file name; the merge endpoint stitches the chunks together and inserts a record; and two more endpoints return preview info and download the file by ID, returning 404 when the file does not exist:

using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using SqlSugar;
using webapi_study.Models;
using webapi_study.responsity;

namespace webapi_study.Controllers;

[ApiController]
[Route("api/[controller]/[action]")]
public class FileController : ControllerBase
{
    private readonly ISqlSugarClient reponsitory;
    private const string UploadsDirectory = "uploads";
    private const string MergedDirectory = "merged";
    private const long ChunkSize = 1024 * 1024 * 5; // 5MB per chunk

    // inject the SqlSugar client through the constructor
    public FileController(ISqlSugarClient reponsitory)
    {
        this.reponsitory = reponsitory;
    }

    // upload endpoint
    [HttpPost]
    public async Task<IActionResult> Upload(IFormFile file, [FromForm] int chunkIndex = 1, [FromForm] int chunkCount = 1)
    {
        if (file == null || file.Length == 0)
        {
            return BadRequest("No file uploaded.");
        }
        if (!Directory.Exists(UploadsDirectory))
        {
            Directory.CreateDirectory(UploadsDirectory);
        }
        var filePath = Path.Combine(UploadsDirectory, file.FileName);
        using (var stream = new FileStream(filePath, FileMode.Create))
        {
            await file.CopyToAsync(stream);
        }
        // last chunk, or a file smaller than the chunk size that was not chunked
        if (chunkCount == 1 || (chunkIndex + 1 < chunkCount && file.Length < ChunkSize))
        {
            var fileSize = file.Length;
            // preview URL; a relative path is assumed here, adjust as needed
            string webUrl = $"/{UploadsDirectory}/{file.FileName}";
            var fileUpload = new FileUpload
            {
                Id = Guid.NewGuid(),
                Name = file.FileName,
                Size = fileSize,
                WebUrl = webUrl,
                UploadTime = DateTime.Now
            };
            // insert the record into the database
            var insertResult = reponsitory.Insertable(fileUpload).ExecuteCommand();
            if (insertResult > 0)
            {
                return Ok(new { message = "File uploaded and inserted into database.", filePath });
            }
            return StatusCode(500, new { message = "Failed to insert file information into database." });
        }
        return Ok(new { message = "Chunk uploaded successfully." });
    }

    // list the chunks already uploaded for a file
    [HttpGet]
    public IActionResult UploadedChunks([FromQuery] string fileName)
    {
        if (!Directory.Exists(UploadsDirectory))
        {
            return Ok(new { uploadedChunks = new List<string>() });
        }
        var files = Directory.GetFiles(UploadsDirectory);
        var uploadedChunks = files.Where(file => Path.GetFileName(file).StartsWith(fileName)).ToList();
        return Ok(new { uploadedChunks });
    }

    // merge the uploaded chunks
    [HttpPost]
    public IActionResult Merge([FromBody] MergeRequest request)
    {
        if (!Directory.Exists(MergedDirectory))
        {
            Directory.CreateDirectory(MergedDirectory);
        }
        var filePath = Path.Combine(MergedDirectory, request.FileName);
        using (var writeStream = new FileStream(filePath, FileMode.Create))
        {
            try
            {
                for (int i = 0; i < request.ChunkCount; i++)
                {
                    var chunkPath = Path.Combine(UploadsDirectory, $"{request.FileName}.part{i}");
                    if (!System.IO.File.Exists(chunkPath))
                    {
                        return StatusCode(500, new { message = "Failed to merge file.", error = $"Chunk {i} is missing" });
                    }
                    var data = System.IO.File.ReadAllBytes(chunkPath);
                    writeStream.Write(data, 0, data.Length);
                    System.IO.File.Delete(chunkPath);
                }
                writeStream.Close();
                var fileInfo = new FileInfo(filePath);
                long fileSize = fileInfo.Length;
                // preview URL; a relative path is assumed here, adjust as needed
                string webUrl = $"/{MergedDirectory}/{request.FileName}";
                var fileUpload = new FileUpload
                {
                    Id = Guid.NewGuid(),
                    Name = request.FileName,
                    Size = fileSize,
                    WebUrl = webUrl,
                    UploadTime = DateTime.Now
                };
                var insertResult = reponsitory.Insertable(fileUpload).ExecuteCommand();
                if (insertResult > 0)
                {
                    return Ok(new { message = "File merged successfully and inserted into database.", filePath });
                }
                return StatusCode(500, new { message = "Failed to insert file information into database." });
            }
            catch (Exception ex)
            {
                return StatusCode(500, new { message = "Failed to merge file.", error = ex.Message });
            }
        }
    }

    // get preview info by file id
    [HttpGet]
    public IActionResult GetFilePreview(Guid id)
    {
        var file = reponsitory.Queryable<FileUpload>().InSingle(id);
        if (file == null)
        {
            return NotFound();
        }
        return Ok(new { webUrl = file.WebUrl });
    }

    // download a file by id
    [HttpGet]
    public IActionResult DownloadFile(Guid id)
    {
        var file = reponsitory.Queryable<FileUpload>().InSingle(id);
        if (file == null)
        {
            return NotFound();
        }
        string filePath = Path.Combine(MergedDirectory, file.Name);
        byte[] bytes = System.IO.File.ReadAllBytes(filePath);
        return File(bytes, "application/octet-stream", file.Name);
    }
}

public class MergeRequest
{
    public string FileName { get; set; }
    public int ChunkCount { get; set; }
}

Chunked upload (Worker implementation)

Besides looping over requests, we can also use a Web Worker to chunk and upload the file across multiple threads. If you are not familiar with Web Workers, see my earlier article: link. Here we create the main-thread side that slices the file:

// chunk size for uploads
const chunkSize = 1024 * 1024 * 1 // e.g. 1MB per chunk
const threadCount = navigator.hardwareConcurrency || 4 // number of hardware threads

// slice the file across worker threads
export const cutFile = (file: File, uploadedChunks: any) => {
  return new Promise((resolve) => {
    const chunkCount = Math.ceil(file.size / chunkSize) // total number of chunks
    const threadChunkCount = Math.ceil(chunkCount / threadCount) // chunks handled per thread
    const result: any = [] // collected chunk results
    let doneThreadNum = 0 // number of finished threads
    // spawn the workers and assign each a range of chunks
    for (let index = 0; index < threadCount; index++) {
      const worker = new Worker('/src/components/compressedfile/断点续传/worker/worker.ts', { type: 'module' })
      let start = index * threadChunkCount // first chunk index for this thread
      let end = (index + 1) * threadChunkCount // one past the last chunk index for this thread
      if (end > chunkCount) end = chunkCount // clamp the last thread's range
      worker.onerror = (err: any) => console.log('worker error:', err)
      // tell the worker to start processing its range
      worker.postMessage({ file, chunkSize, start, end, uploadedChunks })
      // collect the worker's results
      worker.onmessage = (e: any) => {
        e.data.forEach((item: any) => {
          result[item.chunkIndex] = item // store each chunk by its index
        })
        doneThreadNum++
        worker.terminate()
        if (doneThreadNum === threadCount) resolve(result) // all threads done
      }
    }
  })
}
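The start/end arithmetic in cutFile can be factored out and checked on its own. This hypothetical partition helper reproduces it: each worker gets ceil(chunkCount / threadCount) chunks, and the final range is clamped so it never runs past the total:

```typescript
// Partition chunkCount chunks across threadCount workers, mirroring the
// start/end math in cutFile. Trailing ranges may be empty when
// chunkCount < threadCount.
function partition(chunkCount: number, threadCount: number): Array<[number, number]> {
  const perThread = Math.ceil(chunkCount / threadCount);
  const ranges: Array<[number, number]> = [];
  for (let t = 0; t < threadCount; t++) {
    const start = t * perThread;
    const end = Math.min((t + 1) * perThread, chunkCount);
    ranges.push([start, Math.max(start, end)]); // keep ranges well-formed
  }
  return ranges;
}

console.log(partition(10, 4)); // [[0,3],[3,6],[6,9],[9,10]]
```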

Then, in the worker thread, process the data and post the results back to the main thread:

import SparkMD5 from 'spark-md5'

// receive the message from the main thread and process the assigned range
onmessage = async (e) => {
  const { file, chunkSize, start, end, uploadedChunks } = e.data;
  const result = []
  for (let index = start; index < end; index++) {
    if (uploadedChunks.includes(index)) {
      result.push({
        chunkIndex: index,
        isUploaded: true
      })
      continue
    }
    result.push(createChunk(file, index, chunkSize))
  }
  const chunks = await Promise.all(result)
  postMessage(chunks)
}

// create a single file chunk
const createChunk = (file: any, index: any, chunkSize: any) => {
  return new Promise((resolve) => {
    const start = index * chunkSize // byte offset where the chunk starts
    const end = start + chunkSize // byte offset where the chunk ends
    const fileReader = new FileReader()
    const spark = new SparkMD5.ArrayBuffer() // incremental MD5 hasher
    const blob = file.slice(start, end) // cut out the chunk
    fileReader.onload = (e: any) => {
      spark.append(e.target.result) // feed the chunk's bytes to the hasher
      resolve({
        chunkStart: start, // start offset
        chunkEnd: end, // end offset
        chunkIndex: index, // index of this chunk
        chunkHash: spark.end(), // hash of this chunk
        chunkBlob: blob, // the chunk's contents, for upload
        isUploaded: false // whether the chunk is already on the server
      })
    }
    fileReader.readAsArrayBuffer(blob) // read the chunk as an ArrayBuffer
  })
}

Once the main thread has the results, call cutFile from the .tsx component:

import axios from "axios"
import { cutFile } from "./worker"
import { useState } from "react";

const Index = () => {
  const [selectedFiles, setSelectedFiles] = useState<File[]>([]);

  // 01 select the files
  const handleFileChange = (e: any) => setSelectedFiles(Array.from(e.target.files));

  // 02 fetch the list of chunks already uploaded
  const getUploadedChunks = async (fileName: string) => {
    const res: any = await axios.get(`https://localhost:7189/api/Worker/GetUploadedChunks?fileName=${fileName}`);
    return res.data.uploadedChunks;
  };

  // 03 slice each file and upload the missing chunks
  const handleUpload = async () => {
    if (selectedFiles.length === 0) alert('Please select a file');
    for (let index = 0; index < selectedFiles.length; index++) {
      const file = selectedFiles[index]
      const uploadedChunks = await getUploadedChunks(file.name) // chunks already on the server
      // const uploadedChunks = [1, 2, 3, 4]
      const chunks: any = await cutFile(file, uploadedChunks)
      let uploadedNumber = 0
      for (let index = 0; index < chunks.length; index++) {
        const { chunkIndex, chunkHash, chunkBlob, isUploaded } = chunks[index]
        // skip chunks that were uploaded previously
        if (isUploaded) {
          uploadedNumber++
          // if every chunk was already uploaded, merge immediately
          if (uploadedNumber === chunks.length) {
            mergeChunks(file.name)
          }
          continue;
        }
        const formData = new FormData()
        formData.append('fileName', file.name)
        formData.append('chunkBlob', chunkBlob)
        formData.append('chunkHash', chunkHash)
        formData.append('chunkIndex', chunkIndex)
        const res = await fetch('https://localhost:7189/api/Worker/Upload', {
          method: 'POST',
          body: formData,
        })
        chunks[chunkIndex].isUploaded = true // mark the chunk as uploaded
        const isAllUploaded = chunks.every(({ isUploaded }: any) => isUploaded)
        if (isAllUploaded) {
          mergeChunks(file.name) // all chunks are up, merge them
        }
      }
    }
  }

  // 04 ask the back end to merge the chunks
  const mergeChunks = async (fileName: string) => {
    const res = await fetch('https://localhost:7189/api/Worker/Merge', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ fileName: fileName })
    });
    console.log(res)
  }

  return (
    <div>
      <input type="file" multiple onChange={handleFileChange} />
      <button onClick={handleUpload}>Upload</button>
    </div>
  )
}
export default Index

The final result looks like this:

And the rows are indeed inserted into the database:

Here is the back-end code for those interested:

using Microsoft.AspNetCore.Mvc;
using SqlSugar;
using System.Security.Cryptography;
using webapi_study.Models;

namespace FileUploadApi.Controllers
{
    [ApiController]
    [Route("api/[controller]/[action]")]
    public class WorkerController : ControllerBase
    {
        private readonly ISqlSugarClient reponsitory;
        private const string UploadFolder = "uploads";
        private const string MergedDirectory = "merged";

        // inject the SqlSugar client through the constructor
        public WorkerController(ISqlSugarClient reponsitory)
        {
            this.reponsitory = reponsitory;
        }

        // compute the MD5 hash of an uploaded file
        private string CalculateHash(IFormFile file)
        {
            using (var md5 = MD5.Create())
            using (var stream = file.OpenReadStream())
            {
                var hashBytes = md5.ComputeHash(stream);
                return BitConverter.ToString(hashBytes).Replace("-", "").ToLowerInvariant();
            }
        }

        // list the chunks already uploaded for a file
        [HttpGet]
        public IActionResult GetUploadedChunks([FromQuery] string fileName)
        {
            if (!Directory.Exists(UploadFolder))
            {
                return Ok(new { uploadedChunks = new List<string>() });
            }
            var files = Directory.GetFiles(UploadFolder);
            var uploadedChunks = files.Where(file => Path.GetFileName(file).StartsWith(fileName)).ToList();
            return Ok(new { uploadedChunks });
        }

        // upload a single chunk
        [HttpPost]
        public async Task<IActionResult> Upload()
        {
            try
            {
                // read the multipart form data
                var formCollection = await Request.ReadFormAsync();
                var fileName = formCollection["fileName"];
                var chunkBlob = formCollection.Files["chunkBlob"]; // the chunk's binary data
                var chunkHash = formCollection["chunkHash"];
                var chunkIndex = formCollection["chunkIndex"];
                // hash the received chunk and compare with the hash sent by the front end
                string calculatedHash = CalculateHash(chunkBlob);
                if (calculatedHash != chunkHash)
                {
                    return BadRequest(new { message = "Chunk hash verification failed; the file may be corrupted." });
                }
                var fileDirectory = Path.Combine(UploadFolder, fileName);
                if (!Directory.Exists(fileDirectory))
                {
                    Directory.CreateDirectory(fileDirectory);
                }
                var chunkFilePath = Path.Combine(fileDirectory, $"{fileName}-{chunkIndex}");
                using (var stream = new FileStream(chunkFilePath, FileMode.Create))
                {
                    await chunkBlob.CopyToAsync(stream);
                }
                return Ok(new { message = "Chunk uploaded successfully." });
            }
            catch (Exception ex)
            {
                return StatusCode(500, new { message = $"An error occurred while uploading: {ex.Message}" });
            }
        }

        // merge the uploaded chunks
        [HttpPost]
        public IActionResult Merge([FromBody] WorkerMergeRequest request)
        {
            if (!Directory.Exists(MergedDirectory))
            {
                Directory.CreateDirectory(MergedDirectory);
            }
            var filePath = Path.Combine(MergedDirectory, request.FileName);
            try
            {
                // concatenate all chunk files in index order
                using (var writeStream = new FileStream(filePath, FileMode.Create))
                {
                    int chunkIndex = 0;
                    while (true)
                    {
                        var chunkFilePath = Path.Combine(UploadFolder, request.FileName, $"{request.FileName}-{chunkIndex}");
                        if (!System.IO.File.Exists(chunkFilePath))
                        {
                            break; // no more chunks, stop merging
                        }
                        using (var readStream = new FileStream(chunkFilePath, FileMode.Open))
                        {
                            readStream.CopyTo(writeStream);
                        }
                        // delete the chunk once it has been merged
                        System.IO.File.Delete(chunkFilePath);
                        chunkIndex++;
                    }
                }
                var fileInfo = new FileInfo(filePath);
                long fileSize = fileInfo.Length;
                // preview URL; a relative path is assumed here, adjust as needed
                string webUrl = $"/{MergedDirectory}/{request.FileName}";
                var fileUpload = new FileUpload
                {
                    Id = Guid.NewGuid(),
                    Name = request.FileName,
                    Size = fileSize,
                    WebUrl = webUrl,
                    UploadTime = DateTime.Now,
                };
                var insertResult = reponsitory.Insertable(fileUpload).ExecuteCommand();
                if (insertResult > 0)
                {
                    return Ok(new { message = "File merged successfully and inserted into database.", filePath });
                }
                return StatusCode(500, new { message = "Failed to insert file information into database." });
            }
            catch (Exception ex)
            {
                return StatusCode(500, new { message = "Failed to merge file.", error = ex.Message });
            }
        }

        // request body for the merge endpoint
        public class WorkerMergeRequest
        {
            public string FileName { get; set; }
        }
    }
}

Resumable upload

Resumable upload means that if a large upload is interrupted (for example by a network failure), it can continue from where it stopped instead of starting over. It requires chunked upload plus a record of upload progress. The core steps on the front end:

1) Chunk the file: split the large file into small pieces (chunks), each an independent upload unit; a fixed chunk size (e.g. 1MB or 5MB) is chosen to keep the cost of retrying a failed upload low

2) Record upload progress: before each upload the front end records which chunks have already been uploaded successfully

3) Check the uploaded chunks: when uploading a file, the front end first asks the server which chunks it already has, and the server returns a list of the chunks that were uploaded successfully

4) Upload the chunks: chunks are uploaded one at a time with progress tracked along the way; once every chunk is up, the front end asks the server to merge them into the complete file (or the server merges automatically during upload)
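Step 3 in isolation: ask the server which chunk indices it already has and upload only the rest. The endpoint path and the { uploaded: number[] } response shape below are assumptions, not the article's actual API; fetch is injectable so the sketch can be exercised without a server:

```typescript
// Query the (hypothetical) server for already-uploaded chunk indices and
// return the indices that still need uploading.
async function missingChunks(
  fileName: string,
  totalChunks: number,
  fetchImpl: typeof fetch = fetch
): Promise<number[]> {
  const res = await fetchImpl(`/uploaded-chunks?fileName=${encodeURIComponent(fileName)}`);
  const { uploaded } = (await res.json()) as { uploaded: number[] };
  const done = new Set(uploaded);
  const missing: number[] = [];
  for (let i = 0; i < totalChunks; i++) {
    if (!done.has(i)) missing.push(i);
  }
  return missing;
}
```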

import React, { useState, useRef } from 'react';

const CHUNK_SIZE = 1024 * 1024; // 1MB per chunk

const BreakpointUpload = () => {
  const [file, setFile] = useState(null);
  const [uploadedChunks, setUploadedChunks] = useState([]);
  const [isUploading, setIsUploading] = useState(false);
  const fileInputRef = useRef(null);

  const handleFileChange = (e) => {
    const selectedFile = e.target.files[0];
    if (selectedFile) {
      setFile(selectedFile);
      setUploadedChunks([]);
    }
  };

  const uploadChunk = async (chunk, index) => {
    const formData = new FormData();
    formData.append('file', chunk);
    formData.append('filename', file.name);
    formData.append('chunkIndex', index);
    formData.append('totalChunks', Math.ceil(file.size / CHUNK_SIZE));
    try {
      const response = await fetch('/upload', {
        method: 'POST',
        body: formData
      });
      if (response.ok) {
        setUploadedChunks(prev => [...prev, index]);
      } else {
        console.error('Chunk upload failed:', response.statusText);
      }
    } catch (error) {
      console.error('Error while uploading chunk:', error);
    }
  };

  const startUpload = async () => {
    if (!file) return;
    setIsUploading(true);
    const totalChunks = Math.ceil(file.size / CHUNK_SIZE);
    for (let i = 0; i < totalChunks; i++) {
      if (uploadedChunks.includes(i)) continue; // skip chunks that are already up
      const start = i * CHUNK_SIZE;
      const end = Math.min(start + CHUNK_SIZE, file.size);
      const chunk = file.slice(start, end);
      await uploadChunk(chunk, i);
    }
    setIsUploading(false);
    console.log('Upload complete');
  };

  return (
    <div>
      <input type="file" ref={fileInputRef} onChange={handleFileChange} />
      <button onClick={startUpload} disabled={isUploading || !file}>
        {isUploading ? 'Uploading…' : 'Start upload'}
      </button>
    </div>
  );
};
export default BreakpointUpload;