How to use TensorDataset and DataLoader together in PyTorch

https://www.devze.com  2023-02-21 09:31  Source: Web  Author: 拥抱晨曦之温暖
Table of contents
  • Using TensorDataset and DataLoader together in PyTorch
    • A small example
  • Source analysis of PyTorch's DataLoader, Dataset, and TensorDataset
    • 1. Why use DataLoader and Dataset
    • 2. Using Dataset
    • 3. Using TensorDataset
  • Summary

    Using TensorDataset and DataLoader together in PyTorch

    Let's first understand TensorDataset and DataLoader literally: a TensorDataset is a dataset that holds only tensors, while a DataLoader is a data loader; whenever a DataLoader shows up, it usually means the data needs to be iterated over and operated on.

    TensorDataset(tensor1, tensor2) pairs tensor1 with tensor2, i.e. tensor1 holds the data and tensor2 holds the labels corresponding to tensor1.

    A small example

    from torch.utils.data import TensorDataset, DataLoader
    import torch

    a = torch.tensor([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9],
                      [1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9],
                      [1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9],
                      [1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]])

    b = torch.tensor([44, 55, 66, 44, 55, 66, 44, 55, 66, 44, 55, 66])
    train_ids = TensorDataset(a, b)
    # Slice the dataset
    print(train_ids[0:4])  # rows 0, 1, 2, 3
    # Iterate over the samples one at a time
    for x_train, y_label in train_ids:
      print(x_train, y_label)

    The corresponding output:

    (tensor([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9],
            [1, 2, 3]]), tensor([44, 55, 66, 44]))
    ===============================================
    tensor([1, 2, 3]) tensor(44)
    tensor([4, 5, 6]) tensor(55)
    tensor([7, 8, 9]) tensor(66)
    tensor([1, 2, 3]) tensor(44)
    tensor([4, 5, 6]) tensor(55)
    tensor([7, 8, 9]) tensor(66)
    tensor([1, 2, 3]) tensor(44)
    tensor([4, 5, 6]) tensor(55)
    tensor([7, 8, 9]) tensor(66)
    tensor([1, 2, 3]) tensor(44)
    tensor([4, 5, 6]) tensor(55)
    tensor([7, 8, 9]) tensor(66)

    The output makes the pairing of tensor data with tensor labels easy to see; this is the basic use of TensorDataset.

    Next we wrap the constructed TensorDataset in a DataLoader to operate on its data:

    # Parameter notes: dataset=train_ids is the dataset to wrap, batch_size is how many samples to draw per batch
    # shuffle controls the order: False draws the data in order, True draws it in random order
    train_loader = DataLoader(dataset=train_ids, batch_size=4, shuffle=False)
    # Note that enumerate returns two values: an index and the data (containing both training data and labels)
    for i, data in enumerate(train_loader, 1):
      train_data, label = data
      print(' batch:{0} x_data:{1} label: {2}'.format(i, train_data, label))

    The corresponding output:

     batch:1 x_data:tensor([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9],
            [1, 2, 3]])  label: tensor([44, 55, 66, 44])
     batch:2 x_data:tensor([[4, 5, 6],
            [7, 8, 9],
            [1, 2, 3],
            [4, 5, 6]])  label: tensor([55, 66, 44, 55])
     batch:3 x_data:tensor([[7, 8, 9],
            [1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])  label: tensor([66, 44, 55, 66])
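
    For comparison, here is a minimal sketch (not part of the original example) of the same loop with shuffle=True; the samples are reshuffled at every epoch, so the batch contents will differ from run to run:

    shuffled_loader = DataLoader(dataset=train_ids, batch_size=4, shuffle=True)
    for i, (x_data, label) in enumerate(shuffled_loader, 1):
      # With shuffle=True the 12 samples are drawn in a random order each epoch
      print(' batch:{0} x_data:{1} label: {2}'.format(i, x_data, label))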

    That concludes the combined use of TensorDataset and DataLoader.

    Let's also take a look at the source code of these two classes:

    class TensorDataset(Dataset[Tuple[Tensor, ...]]):
      r"""Dataset wrapping tensors.
      Each sample will be retrieved by indexing tensors along the first dimension.
      Arguments:
        *tensors (Tensor): tensors that have the same size of the first dimension.
      """
      tensors: Tuple[Tensor, ...]
    
      def __init__(self, *tensors: Tensor) -> None:
        assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
        self.tensors = tensors
    
      def __getitem__(self, index):
        return tuple(tensor[index] for tensor in self.tensors)
    
      def __len__(self):
        return self.tensors[0].size(0)
    
    # This class is long, so only the parameters relevant to this article are listed; see the source for the rest
    class DataLoader(Generic[T_co]):
      r"""
      Data loader. Combines a dataset and a sampler, and provides an iterable over
      the given dataset.
      The :class:`~torch.utils.data.DataLoader` supports both map-style and
      iterable-style datasets with single- or multi-process loading, customizing
      loading order and optional automatic batching (collation) and memory pinning.
      See :py:mod:`torch.utils.data` documentation page for more details.
      Arguments:
        dataset (Dataset): dataset from which to load the data.
        batch_size (int, optional): how many samples per batch to load
          (default: ``1``).
        shuffle (bool, optional): set to ``True`` to have the data reshuffled
          at every epoch (default: ``False``).
      """
      dataset: Dataset[T_co]
      batch_size: Optional[int]
    
      def __init__(self, dataset: Dataset[T_co], batch_size: Optional[int] = 1,
            shuffle: bool = False):
    
        self.dataset = dataset
        self.batch_size = batch_size

    Source analysis of PyTorch's DataLoader, Dataset, and TensorDataset

    1. Why use DataLoader and Dataset

    When loading and processing large amounts of data, memory can run out. This is where the dataset classes Dataset and TensorDataset and the loader class DataLoader come in.

    With these classes the data can be split into small chunks and read into memory piece by piece as it is needed, instead of loading everything into memory up front, as sketched below.
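
    As a concrete (hypothetical) illustration of that point, here is a minimal sketch of a map-style Dataset that reads a single CSV row from disk per __getitem__ call, so only the rows of the current batch ever sit in memory; the file name, row count, and headerless layout are assumptions, not part of the original article:

    import pandas as pd
    import torch
    from torch.utils.data import Dataset

    class LazyCsvDataset(Dataset):
      """Reads one row from disk per __getitem__ call instead of loading the whole file."""

      def __init__(self, csv_path, num_rows):
        self.csv_path = csv_path
        self.num_rows = num_rows  # assumed to be known (or counted once beforehand)

      def __getitem__(self, index):
        # Only a single row is parsed and held in memory here; this trades speed for memory
        row = pd.read_csv(self.csv_path, skiprows=index, nrows=1, header=None)
        return torch.tensor(row.values[0], dtype=torch.float32)

      def __len__(self):
        return self.num_rows

    # Hypothetical usage: lazy_loader = DataLoader(LazyCsvDataset('data1.csv', 73700), batch_size=32)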

    2. Using Dataset

    torch.utils.data.Dataset in PyTorch is an abstract class representing a dataset. It is generally not used directly; instead you use it by defining a custom dataset.

    A custom dataset should subclass Dataset and implement a __len__ method that returns the size of the dataset and a __getitem__ method that fetches data by index.

    The source code of the Dataset class:

    class Dataset(object):
      r"""An abstract class representing a :class:`Dataset`.
    
      All datasets that represent a map from keys to data samples should subclass
      it. All subclasses should overwrite :meth:`__getitem__`, supporting fetching a
      data sample for a given key. Subclasses could also optionally overwrite
      :meth:`__len__`, which is expected to return the size of the dataset by many
      :class:`~torch.utils.data.Sampler` implementations and the default options
      of :class:`~torch.utils.data.DataLoader`.
    
      .. note::
       :class:`~torch.utils.data.DataLoader` by default constructs a index
       sampler that yields integral indices. To make it work with a map-style
       dataset with non-integral indices/keys, a custom sampler must be provided.
      """
    
      def __getitem__(self, index):
        raise NotImplementedError
    
      def __add__(self, other):
        return ConcatDataset([self, other])
    
      # No `def __len__(self)` default?
      # See NOTE [ Lack of Default `__len__` in python Abstract Base Classes ]
      # in pytorch/torch/utils/data/sampler.py

    As you can see, the Dataset class has no __len__ method, and although it has a __getitem__ method, that method does not implement anything useful.

    So we need to write a subclass of Dataset that provides the required functionality.

    An example of a custom subclass:

    import torch
    from torch.utils.data import Dataset, DataLoader, TensorDataset
    from torch.autograd import Variable
    import numpy as np
    import pandas as pd
    
    value_df = pd.read_csv('data1.csv')
    value_array = np.array(value_df)
    print("value_array.shape =", value_array.shape) # (73700, 300)
    value_size = value_array.shape[0] # 73700
    train_size = int(0.7*value_size)
    
    train_array = value_array[:train_size]
    train_label_array = value_array[60:train_size+60]
    
    class DealDataset(Dataset):
      """
        Downloading and initializing the data can all be done here.
      """
    
      def __init__(self, *arrays):
        assert all(arrays[0].shape[0] == array.shape[0] for array in arrays)
        self.arrays = arrays
    
      def __getitem__(self, index):
        return tuple(array[index] for array in self.arrays)
    
      def __len__(self):
        return self.arrays[0].shape[0]
    
    
    # Instantiate this class to get a Dataset-type object, then pass it to DataLoader and it's ready to use.
    train_dataset = DealDataset(train_array, train_label_array)
    
    train_loader2 = DataLoader(dataset=train_dataset,
                 batch_size=32,
                 shuffle=True)
    
    for epoch in range(2):
      for i, data in enumerate(train_loader2):
        # Read one batch of 32 samples from train_loader2
        inputs, labels = data

        # Wrap the data in Variable (kept from the original code)
        inputs, labels = Variable(inputs), Variable(labels)

        # This is where the model would normally run; print is used as a stand-in
        print("epoch:", epoch, "batch:", i, "inputs", inputs.data.size(), "labels", labels.data.size())

    Result:

    epoch: 0 batch: 0 inputs torch.Size([32, 300]) labels torch.Size([32, 300])
    epoch: 0 batch: 1 inputs torch.Size([32, 300]) labels torch.Size([32, 300])
    epoch: 0 batch: 2 inputs torch.Size([32, 300]) labels torch.Size([32, 300])
    epoch: 0 batch: 3 inputs torch.Size([32, 300]) labels torch.Size([32, 300])
    epoch: 0 batch: 4 inputs torch.Size([32, 300]) labels torch.Size([32, 300])
    epoch: 0 batch: 5 inputs torch.Size([32, 300]) labels torch.Size([32, 300])
    ...

    3. Using TensorDataset

    TensorDataset is a dataset class that can be used directly; its source code is as follows:

    class TensorDataset(Dataset):
      r"""Dataset wrapping tensors.
    
      Each sample will be retrieved by indexing tensors along the first dimension.
    
      Arguments:
        *tensors (Tensor): tensors that have the same size of the first dimension.
      """
    
      def __init__(self, *tensors):
        assert all(tensors[0].size(0) == tensor.size(0) for tensor in tensors)
        self.tensors = tensors
    
      def __getitem__(self, index):
        return tuple(tensor[index] for tensor in self.tensors)
    
      def __len__(self):
        return self.tensors[0].size(0)

    As you can see, TensorDataset is a subclass of Dataset and already provides the __len__ method that returns the dataset size and the __getitem__ method that fetches data by index, so it can be used directly.

    Its structure is the same as that of the custom subclass above; the only difference is that TensorDataset requires the inputs to be torch.Tensor, whereas a custom subclass is free to choose its own types.
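
    As a quick (hypothetical) illustration of that difference: the NumPy arrays from the earlier example can be passed to the custom DealDataset as-is, but have to be converted to tensors before being handed to TensorDataset:

    # The custom subclass happily accepts plain NumPy arrays
    custom_ds = DealDataset(train_array, train_label_array)

    # TensorDataset expects torch.Tensor inputs, so convert first
    tensor_ds = TensorDataset(torch.from_numpy(train_array),
                              torch.from_numpy(train_label_array))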

    A full usage example:

    import torch
    from torch.utils.data import Dataset, DataLoader, TensorDataset
    from torch.autograd import Variable
    import numpy as np
    import pandas as pd
    
    value_df = pd.read_csv('data1.csv')
    value_array = np.array(value_df)
    print("value_array.shape =", value_array.shape) # (73700, 300)
    value_size = value_array.shape[0] # 73700
    train_size = int(0.7*value_size)
    
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # device was not defined in the original snippet
    train_array = value_array[:train_size]
    train_tensor = torch.tensor(train_array, dtype=torch.float32).to(device)
    train_label_array = value_array[60:train_size+60]
    train_labels_tensor = torch.tensor(train_label_array, dtype=torch.float32).to(device)
    
    train_dataset = TensorDataset(train_tensor, train_labels_tensor)
    train_loader = DataLoader(dataset=train_dataset,
                 batch_size=100,
                 shuffle=False,
                 num_workers=0)
    
    for epoch in range(2):
      for i, data in enumerate(train_loader):
        inputs, labels = data
        inputs, labels = Variable(inputs), Variable(labels)
        print(epoch, i, "inputs", inputs.data.size(), "labels", labels.data.size())

    Result:

    0 0 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 1 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 2 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 3 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 4 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 5 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 6 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 7 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 8 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 9 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    0 10 inputs torch.Size([100, 300]) labels torch.Size([100, 300])
    ...

    Summary

    The above is based on personal experience; I hope it can serve as a useful reference for everyone.
