
Class B2Upload

Manages the upload of objects to Backblaze B2.

This class supports uploading Buffers and strings with the "simple APIs". It supports streams too, using the "simple APIs" when the stream is shorter than chunkSize, or the large file APIs otherwise. The selection happens automatically.

Hierarchy

  • B2Upload

Index

Constructors

constructor

  • new B2Upload(client: any, bucketId: string, path: string, data: Stream | string | Buffer, metadata?: any, length?: number): B2Upload
  • Initializes a new B2Upload class

    Parameters

    • client: any

Instance of the B2 client library. Authorization is expected to have been completed already, so the auth data is stored in the library.

    • bucketId: string

      Id of the target bucket

    • path: string

      Path where to store the object, inside the container

    • data: Stream | string | Buffer

      Data to upload

    • Optional metadata: any

      Metadata for the object

    • Optional length: number

      Length (in bytes) of the input data

    Returns B2Upload
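For illustration, here is a minimal sketch of how the constructor might normalize string input to a Buffer and determine the byte length when the optional `length` argument is omitted. The helper name is hypothetical, not part of the class:

```typescript
// Hypothetical helper: normalize string input to a Buffer and
// determine the byte length when it is not passed explicitly.
function normalizeData(
    data: string | Buffer,
    length?: number
): {data: Buffer, length: number} {
    const buf = typeof data === 'string' ? Buffer.from(data, 'utf8') : data
    return {data: buf, length: length ?? buf.byteLength}
}
```

Note that for a Stream the length generally cannot be inferred up front, which is why the constructor accepts the optional `length` parameter.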

Properties

Protected bucketId

bucketId: string

Id of the target bucket

Protected client

client: any

Instance of the B2 client library

Protected data

data: Stream | string | Buffer

Data to upload

Protected length

length: number

Length (in bytes) of the input data

Protected metadata

metadata: any

Metadata for the object

Protected path

path: string

Path where to store the object, inside the container

Static chunkSize

chunkSize: number = 9 * 1024 * 1024

Size of each chunk that is uploaded when using B2's large file APIs, in bytes. Minimum value is 5MB; default is 9MB.
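Given chunkSize, the number of parts a large file upload is split into is a simple ceiling division. A sketch (the function name is illustrative, not from the class):

```typescript
const chunkSize = 9 * 1024 * 1024

// Number of parts a payload of `length` bytes is split into when
// uploaded via the large file APIs; the last part may be shorter.
function countParts(length: number): number {
    return Math.ceil(length / chunkSize)
}
```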

Note: there seems to be a bug in the current version of the backblaze-b2 package when the request body upload is > 10 MB, because of a downstream dependency on axios@0.17; once backblaze-b2 updates its dependency on axios, this might be fixed.

Static retries

retries: number = 3

Backblaze recommends retrying all uploads at least two times (up to five) in case of errors, with an increasing delay between attempts. By default, all uploads are retried 3 times.
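A minimal sketch of retrying an operation with an increasing delay; the helper is illustrative and may differ from the class's actual retry logic:

```typescript
// Retry an async operation up to `retries` times, waiting
// `attempt * baseDelayMs` milliseconds between failed attempts.
async function withRetries<T>(
    op: () => Promise<T>,
    retries = 3,
    baseDelayMs = 500
): Promise<T> {
    let lastError: unknown
    for (let attempt = 1; attempt <= retries; attempt++) {
        try {
            return await op()
        } catch (err) {
            lastError = err
            if (attempt < retries) {
                await new Promise((resolve) => setTimeout(resolve, attempt * baseDelayMs))
            }
        }
    }
    throw lastError
}
```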

Methods

Private putFile

  • putFile(data?: Buffer): Promise<any>
  • Uploads a single file, when data is a Buffer or string.

    async

    Parameters

    • Optional data: Buffer

      Data to upload, as Buffer. If not specified, will use this.data

    Returns Promise<any>

    Promise that resolves when the object has been uploaded
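Assuming the backblaze-b2 package's API shape (a getUploadUrl call returning an upload URL plus auth token, followed by uploadFile), the simple upload path might look like the sketch below. It is shown with a mock client so it runs standalone; the real client and response shapes may differ:

```typescript
// Assumed shape of the relevant subset of the backblaze-b2 client
interface B2Client {
    getUploadUrl(args: {bucketId: string}): Promise<{data: {uploadUrl: string, authorizationToken: string}}>
    uploadFile(args: {uploadUrl: string, uploadAuthToken: string, fileName: string, data: Buffer}): Promise<{data: any}>
}

// Sketch of the simple-API upload path: request an upload URL for
// the bucket, then send the whole Buffer in a single request.
async function putFileSketch(client: B2Client, bucketId: string, path: string, data: Buffer) {
    const urlResponse = await client.getUploadUrl({bucketId})
    return client.uploadFile({
        uploadUrl: urlResponse.data.uploadUrl,
        uploadAuthToken: urlResponse.data.authorizationToken,
        fileName: path,
        data
    })
}

// Mock client standing in for the real backblaze-b2 library
const mockClient: B2Client = {
    getUploadUrl: async () => ({data: {uploadUrl: 'https://example.invalid/upload', authorizationToken: 'token'}}),
    uploadFile: async (args) => ({data: {fileName: args.fileName, contentLength: args.data.byteLength}})
}
```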

Private putLargeFile

  • putLargeFile(stream?: Readable): Promise<any>
  • Uploads a Readable Stream.

    async

    Parameters

    • Optional stream: Readable

      Readable Stream containing the data to upload

    Returns Promise<any>

    Promise that resolves when the object has been uploaded
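A sketch of how a Readable Stream might be consumed in chunkSize pieces before each piece is uploaded as a part. The helper is illustrative; the example uses a small chunk size so it is self-contained:

```typescript
import {Readable} from 'stream'

// Collect a Readable stream into Buffers of at most `chunkSize` bytes,
// so each one can be uploaded as a separate part.
async function streamToChunks(stream: Readable, chunkSize: number): Promise<Buffer[]> {
    const chunks: Buffer[] = []
    let pending: Buffer[] = []
    let pendingLength = 0
    for await (const piece of stream) {
        let buf = Buffer.isBuffer(piece) ? piece : Buffer.from(piece)
        while (pendingLength + buf.byteLength >= chunkSize) {
            // Complete the current chunk with the first bytes of this piece
            const take = chunkSize - pendingLength
            pending.push(buf.subarray(0, take))
            chunks.push(Buffer.concat(pending))
            pending = []
            pendingLength = 0
            buf = buf.subarray(take)
        }
        if (buf.byteLength > 0) {
            pending.push(buf)
            pendingLength += buf.byteLength
        }
    }
    // The last chunk may be shorter than chunkSize
    if (pendingLength > 0) {
        chunks.push(Buffer.concat(pending))
    }
    return chunks
}
```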

Private putPart

  • putPart(fileId: string, partNumber: number, data: Buffer): Promise<any>
  • Uploads a single part of a large file.

    async

    Parameters

    • fileId: string

      ID of the large file that is being uploaded

    • partNumber: number

      Number of the part, starting from 1

    • data: Buffer

      Data to upload, in a Buffer

    Returns Promise<any>

    Promise that resolves when the part has been uploaded.
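B2's large file APIs require a SHA1 checksum for each uploaded part, collected in order into the partSha1Array passed to b2_finish_large_file. A minimal sketch of computing it (the function name is illustrative):

```typescript
import {createHash} from 'crypto'

// SHA1 checksums of each part, in order, as required by
// b2_finish_large_file's partSha1Array argument.
function partSha1Array(parts: Buffer[]): string[] {
    return parts.map((part) => createHash('sha1').update(part).digest('hex'))
}
```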

start

  • start(): Promise<void>
  • Starts the upload of the object

    async

    Returns Promise<void>

    Promise that resolves when the object has been uploaded
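The automatic selection described above (simple APIs for Buffers, strings, and small streams; large file APIs otherwise) can be sketched as follows. The branch structure is illustrative, not the exact implementation:

```typescript
const chunkSize = 9 * 1024 * 1024

// Pick the upload strategy: Buffers and strings always go through the
// simple APIs; streams do too when their (known) length is less than
// one chunk, otherwise the large file APIs are used.
function chooseStrategy(
    kind: 'stream' | 'buffer' | 'string',
    length?: number
): 'simple' | 'large' {
    if (kind !== 'stream') {
        return 'simple'
    }
    return length !== undefined && length < chunkSize ? 'simple' : 'large'
}
```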

Generated using TypeDoc