Source: https://blog.csdn.net/qq_60245590/article/details/147149925

The code in this section shows how to fine-tune a pre-trained BERT model so that it can be adapted to a specific downstream task.

Suggested way to study: go straight to the "Full code to reproduce" at the end of the article, then come back to the explanations below for anything that is unclear.

Fine-tuning is a common technique in natural language processing: by adding extra layers on top of a pre-trained model and training on a task-specific dataset, the model can quickly adapt to a new task. The code is explained below from the fine-tuning perspective:
 

1. Loading the pre-trained model

self.bert = BertModel.from_pretrained(model_path)
  • Pre-trained model: BertModel.from_pretrained from the transformers library loads a pre-trained BERT model. model_path is the path or name of the pre-trained model, for example "bert-base-chinese".

  • Advantages:

    • The pre-trained model has already been trained on a large corpus and has learned general-purpose language representations.

    • Fine-tuning reuses these pre-trained parameters, so the model can adapt to a new task quickly, usually with much less data and training time; a sketch right after this list shows one common way to control which parameters are updated.
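When fine-tuning, it is also common to decide which parts of the pre-trained encoder are allowed to change, for example by freezing the lower layers and updating the rest with a small learning rate. The snippet below is a minimal sketch of that idea and is not part of this article's code; the number of frozen layers and the learning rate are illustrative choices.

import torch
from transformers import BertModel

bert = BertModel.from_pretrained("bert-base-chinese")

# Freeze the embeddings and the first six encoder layers; only the upper
# layers (and any task heads added later) are updated during fine-tuning.
for param in bert.embeddings.parameters():
    param.requires_grad = False
for layer in bert.encoder.layer[:6]:
    for param in layer.parameters():
        param.requires_grad = False

# Pass only the trainable parameters to the optimizer, with a small learning rate.
trainable = [p for p in bert.parameters() if p.requires_grad]
optim = torch.optim.Adam(trainable, lr=2e-5)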

2. Adding task-specific heads

self.mlm_head = nn.Linear(d_model, vocab_size)
self.nsp_head = nn.Linear(d_model, 2)
  • MLM head: mlm_head is a linear layer used to predict the masked tokens. Its input is BERT's output at every position, and its output is one logit per word in the vocabulary.

  • NSP head: nsp_head is a linear layer used to predict whether the two sentences are adjacent. Its input is BERT's output at the [CLS] token, and its output is a two-class logit. (A quick shape check follows below.)
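As a quick sanity check of the two heads, the snippet below runs them on a random tensor and prints the output shapes. This is an illustrative sketch; the batch size and sequence length are made up, and 21128 is the vocabulary size of bert-base-chinese.

import torch
import torch.nn as nn

d_model, vocab_size = 768, 21128      # bert-base-chinese hidden size and vocab size
batch_size, seq_len = 4, 25           # illustrative values

mlm_head = nn.Linear(d_model, vocab_size)
nsp_head = nn.Linear(d_model, 2)

output = torch.randn(batch_size, seq_len, d_model)  # stands in for last_hidden_state
cls_token = output[:, 0, :]                         # (batch_size, d_model)

print(mlm_head(output).shape)     # torch.Size([4, 25, 21128]) -> one logit per vocab word, per position
print(nsp_head(cls_token).shape)  # torch.Size([4, 2])         -> two-way classification per sentence pair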

3. Forward pass

def forward(self, mlm_tok_ids, seg_ids, mask):
    # Pass the inputs by keyword: BertModel's positional order is
    # (input_ids, attention_mask, token_type_ids), which would swap seg_ids and mask.
    bert_out = self.bert(input_ids=mlm_tok_ids, token_type_ids=seg_ids, attention_mask=mask)
    output = bert_out.last_hidden_state    # (batch_size, seq_len, d_model)
    cls_token = output[:, 0, :]            # [CLS] representation, used for NSP
    mlm_logits = self.mlm_head(output)
    nsp_logits = self.nsp_head(cls_token)
    return mlm_logits, nsp_logits
  • Outputs of the BERT model

    • bert_out.last_hidden_state: the hidden states for every position, with shape (batch_size, seq_len, d_model).

    • The [CLS] output: output[:, 0, :], used for the NSP task.

  • Task-specific outputs

    • mlm_logits: predictions for the MLM task.

    • nsp_logits: predictions for the NSP task. (A short smoke test of the forward pass follows this list.)
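The whole forward pass can be smoke-tested on a single tokenized sentence pair before building the full dataset. This sketch is not part of the original code; it assumes the BERT class above is defined in the same file (the class reads the module-level model_path) and that a bert-base-chinese checkpoint is available. The example sentences are simply the opening of the training text.

import torch
from transformers import BertTokenizer

model_path = "bert-base-chinese"          # read by the BERT class above
tokenizer = BertTokenizer.from_pretrained(model_path)

model = BERT(vocab_size=tokenizer.vocab_size, d_model=768, seq_len=25,
             N_blocks=2, num_heads=12, dropout=0.1, dff=4 * 768)

# Encode a sentence pair as [CLS] s1 [SEP] s2 [SEP], padded/truncated to length 25.
enc = tokenizer("我与父亲不相见已二年余了", "我最不能忘记的是他的背影",
                padding="max_length", max_length=25, truncation=True, return_tensors="pt")

with torch.no_grad():
    mlm_logits, nsp_logits = model(enc["input_ids"], enc["token_type_ids"], enc["attention_mask"])

print(mlm_logits.shape)  # torch.Size([1, 25, vocab_size])
print(nsp_logits.shape)  # torch.Size([1, 2])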

4. Data processing

class BERTDataset(Dataset):
    def __init__(self, nsp_dataset, tokenizer: BertTokenizer, max_length):
        self.nsp_dataset = nsp_dataset
        self.tokenizer = tokenizer
        self.max_length = max_length
        self.cls_id = tokenizer.cls_token_id
        self.sep_id = tokenizer.sep_token_id
        self.pad_id = tokenizer.pad_token_id
        self.mask_id = tokenizer.mask_token_id

    def __getitem__(self, idx):
        sent1, sent2, nsp_label = self.nsp_dataset[idx]
        sent1_ids = self.tokenizer.encode(sent1, add_special_tokens=False)
        sent2_ids = self.tokenizer.encode(sent2, add_special_tokens=False)
        tok_ids = [self.cls_id] + sent1_ids + [self.sep_id] + sent2_ids + [self.sep_id]
        seg_ids = [0] * (len(sent1_ids) + 2) + [1] * (len(sent2_ids) + 1)

        mlm_tok_ids, mlm_labels = self.build_mlm_dataset(tok_ids)
        mlm_tok_ids = self.pad_to_seq_len(mlm_tok_ids, 0)
        # Pad segment ids with 0: the pretrained BertModel only has two segment
        # embeddings (0 and 1), so a padding value of 2 would raise an index error.
        seg_ids = self.pad_to_seq_len(seg_ids, 0)
        mlm_labels = self.pad_to_seq_len(mlm_labels, -100)
        mask = (mlm_tok_ids != 0)

        return {
            "mlm_tok_ids": mlm_tok_ids,
            "seg_ids": seg_ids,
            "mask": mask.long(),   # boolean attention mask as a 0/1 long tensor
            "mlm_labels": mlm_labels,
            "nsp_labels": torch.tensor(nsp_label),
        }
  • Processing steps

    • Convert the text into token ids (tok_ids).

    • Add the special tokens ([CLS] and [SEP]).

    • Build the segment ids (seg_ids) that distinguish the two sentences.

    • Build the MLM training data (mlm_tok_ids and mlm_labels); the 80/10/10 masking rule is sketched right after this list.

    • Pad or truncate every sequence to a fixed length (max_length).

  • Attention mask: mark which positions hold real input (as opposed to padding).
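The masking rule that build_mlm_dataset implements (select roughly 15% of the non-special tokens; of those, replace 80% with [MASK], 10% with a random token, and leave 10% unchanged) can also be written with a single random draw per selected token, which keeps the three proportions exact. A standalone sketch follows; the special-token ids are hard-coded only for illustration and should in practice come from the tokenizer, as in the dataset class above.

import random

CLS_ID, SEP_ID, PAD_ID, MASK_ID = 101, 102, 0, 103  # ids used by bert-base-chinese

def mask_tokens(tok_ids, vocab_size, mask_prob=0.15):
    """Return (masked token ids, labels); unselected positions get label -100."""
    mlm_tok_ids = list(tok_ids)
    mlm_labels = [-100] * len(tok_ids)
    for i, tok in enumerate(tok_ids):
        if tok in (CLS_ID, SEP_ID, PAD_ID) or random.random() >= mask_prob:
            continue
        mlm_labels[i] = tok                  # the model must recover the original token here
        r = random.random()
        if r < 0.8:
            mlm_tok_ids[i] = MASK_ID                              # 80%: [MASK]
        elif r < 0.9:
            mlm_tok_ids[i] = random.randint(106, vocab_size - 1)  # 10%: random token
        # remaining 10%: keep the original token unchanged
    return mlm_tok_ids, mlm_labels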

5. Training loop

for epoch in range(epochs):
    for batch in tqdm(trainloader, desc="Training"):
        batch_mlm_tok_ids = batch["mlm_tok_ids"]
        batch_seg_ids = batch["seg_ids"]
        batch_mask = batch["mask"]
        batch_mlm_labels = batch["mlm_labels"]
        batch_nsp_labels = batch["nsp_labels"]

        mlm_logits, nsp_logits = model(batch_mlm_tok_ids, batch_seg_ids, batch_mask)

        loss_mlm = loss_fn(mlm_logits.view(-1, vocab_size), batch_mlm_labels.view(-1))
        loss_nsp = loss_fn(nsp_logits, batch_nsp_labels)
        loss = loss_mlm + loss_nsp

        loss.backward()
        optim.step()
        optim.zero_grad()

    print("Epoch: {}, MLM Loss: {}, NSP Loss: {}".format(epoch, loss_mlm, loss_nsp))
  • Training steps

    • Forward pass: feed the inputs through the model to obtain the MLM and NSP predictions.

    • Loss computation: compute the MLM loss and the NSP loss separately, then sum them (a sketch after this list shows how per-task accuracy can be tracked as well).

    • Backward pass: compute the gradients and update the model parameters.

    • Optimizer: Adam with the learning rate set to 1e-3 (for fine-tuning a full pre-trained BERT, much smaller rates such as 2e-5 are more typical).

  • Progress bar: tqdm displays the training progress, making the run easier to follow.
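Beyond the raw losses, it is often helpful to log accuracy for the two tasks separately. The helpers below are a small sketch (not part of the original loop) that could be called on mlm_logits / nsp_logits inside the batch loop.

import torch

def mlm_accuracy(mlm_logits, mlm_labels):
    # Score only the positions that were actually masked (label != -100).
    preds = mlm_logits.argmax(dim=-1)
    valid = mlm_labels != -100
    if valid.sum() == 0:
        return 0.0
    return (preds[valid] == mlm_labels[valid]).float().mean().item()

def nsp_accuracy(nsp_logits, nsp_labels):
    return (nsp_logits.argmax(dim=-1) == nsp_labels).float().mean().item()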

6. Advantages of fine-tuning

  • Fast adaptation to new tasks: the pre-trained model has already learned general language representations, so fine-tuning can adapt it to a new task quickly, usually with much less data and training time.

  • Lower compute cost: training BERT from scratch requires large amounts of compute and time, while fine-tuning only needs a small amount of additional training on top of the pre-trained weights.

  • Better performance: because the pre-trained model was trained on large-scale data, it usually performs well to begin with, and fine-tuning further improves it on the specific task.

Full code to reproduce


import re
import math
import torch
import random
import torch.nn as nn
from tqdm import tqdm
from transformers import BertTokenizer, BertModel
from torch.utils.data import Dataset, DataLoader


class BERT(nn.Module):
    def __init__(self, vocab_size, d_model, seq_len, N_blocks, num_heads, dropout, dff):
        super().__init__()
        # model_path is defined at module level in the __main__ block below.
        self.bert = BertModel.from_pretrained(model_path)
        self.mlm_head = nn.Linear(d_model, vocab_size)
        self.nsp_head = nn.Linear(d_model, 2)

    def forward(self, mlm_tok_ids, seg_ids, mask):
        # Pass the inputs by keyword: BertModel's positional order is
        # (input_ids, attention_mask, token_type_ids).
        bert_out = self.bert(input_ids=mlm_tok_ids, token_type_ids=seg_ids, attention_mask=mask)
        output = bert_out.last_hidden_state
        cls_token = output[:, 0, :]
        mlm_logits = self.mlm_head(output)
        nsp_logits = self.nsp_head(cls_token)
        return mlm_logits, nsp_logits


def read_data(file):
    with open(file, "r", encoding="utf-8") as f:
        data = f.read().strip().replace("\n", "")
    corpus = re.split(r'[。,“”:;!、]', data)
    corpus = [sentence for sentence in corpus if sentence.strip()]
    return corpus


def create_nsp_dataset(corpus):
    nsp_dataset = []
    for i in range(len(corpus) - 1):
        next_sentence = corpus[i + 1]
        rand_id = random.randint(0, len(corpus) - 1)
        while abs(rand_id - i) <= 1:
            rand_id = random.randint(0, len(corpus) - 1)
        negt_sentence = corpus[rand_id]
        nsp_dataset.append((corpus[i], next_sentence, 1))  # positive pair
        nsp_dataset.append((corpus[i], negt_sentence, 0))  # negative pair
    return nsp_dataset


class BERTDataset(Dataset):
    def __init__(self, nsp_dataset, tokenizer: BertTokenizer, max_length):
        self.nsp_dataset = nsp_dataset
        self.tokenizer = tokenizer
        self.max_length = max_length
        self.cls_id = tokenizer.cls_token_id
        self.sep_id = tokenizer.sep_token_id
        self.pad_id = tokenizer.pad_token_id
        self.mask_id = tokenizer.mask_token_id

    def __len__(self):
        return len(self.nsp_dataset)

    def __getitem__(self, idx):
        sent1, sent2, nsp_label = self.nsp_dataset[idx]
        sent1_ids = self.tokenizer.encode(sent1, add_special_tokens=False)
        sent2_ids = self.tokenizer.encode(sent2, add_special_tokens=False)
        tok_ids = [self.cls_id] + sent1_ids + [self.sep_id] + sent2_ids + [self.sep_id]
        seg_ids = [0] * (len(sent1_ids) + 2) + [1] * (len(sent2_ids) + 1)

        mlm_tok_ids, mlm_labels = self.build_mlm_dataset(tok_ids)
        mlm_tok_ids = self.pad_to_seq_len(mlm_tok_ids, 0)
        # Pad segment ids with 0: the pretrained BertModel only has two segment
        # embeddings (0 and 1), so a padding value of 2 would raise an index error.
        seg_ids = self.pad_to_seq_len(seg_ids, 0)
        mlm_labels = self.pad_to_seq_len(mlm_labels, -100)
        mask = (mlm_tok_ids != 0)

        return {
            "mlm_tok_ids": mlm_tok_ids,
            "seg_ids": seg_ids,
            "mask": mask.long(),   # boolean attention mask as a 0/1 long tensor
            "mlm_labels": mlm_labels,
            "nsp_labels": torch.tensor(nsp_label),
        }

    def pad_to_seq_len(self, seq, pad_value):
        seq = seq[:self.max_length]
        pad_num = self.max_length - len(seq)
        return torch.tensor(seq + pad_num * [pad_value], dtype=torch.long)

    def build_mlm_dataset(self, tok_ids):
        mlm_tok_ids = tok_ids.copy()
        mlm_labels = [-100] * len(tok_ids)
        for i in range(len(tok_ids)):
            if tok_ids[i] not in [self.cls_id, self.sep_id, self.pad_id]:
                if random.random() < 0.15:
                    mlm_labels[i] = tok_ids[i]
                    if random.random() < 0.8:
                        mlm_tok_ids[i] = self.mask_id  # ~80%: replace with [MASK]
                    elif random.random() < 0.9:
                        mlm_tok_ids[i] = random.randint(106, self.tokenizer.vocab_size - 1)  # ~10%: random token
                    # otherwise keep the original token
        return mlm_tok_ids, mlm_labels


if __name__ == "__main__":
    data_file = "4.10-BERT/背影.txt"
    model_path = "/Users/azen/Desktop/llm/models/bert-base-chinese"
    tokenizer = BertTokenizer.from_pretrained(model_path)
    corpus = read_data(data_file)
    max_length = 25  # len(max(corpus, key=len))
    print("Max length of dataset: {}".format(max_length))

    nsp_dataset = create_nsp_dataset(corpus)
    trainset = BERTDataset(nsp_dataset, tokenizer, max_length)
    batch_size = 16
    trainloader = DataLoader(trainset, batch_size, shuffle=True)

    vocab_size = tokenizer.vocab_size
    d_model = 768
    N_blocks = 2
    num_heads = 12
    dropout = 0.1
    dff = 4 * d_model
    model = BERT(vocab_size, d_model, max_length, N_blocks, num_heads, dropout, dff)

    lr = 1e-3
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    epochs = 20

    for epoch in range(epochs):
        for batch in tqdm(trainloader, desc="Training"):
            batch_mlm_tok_ids = batch["mlm_tok_ids"]
            batch_seg_ids = batch["seg_ids"]
            batch_mask = batch["mask"]
            batch_mlm_labels = batch["mlm_labels"]
            batch_nsp_labels = batch["nsp_labels"]

            mlm_logits, nsp_logits = model(batch_mlm_tok_ids, batch_seg_ids, batch_mask)

            loss_mlm = loss_fn(mlm_logits.view(-1, vocab_size), batch_mlm_labels.view(-1))
            loss_nsp = loss_fn(nsp_logits, batch_nsp_labels)
            loss = loss_mlm + loss_nsp

            loss.backward()
            optim.step()
            optim.zero_grad()

        print("Epoch: {}, MLM Loss: {}, NSP Loss: {}".format(epoch, loss_mlm, loss_nsp))
