Abstract:
For writing data to multi-track tape, a data set is received and segmented into unencoded subdata sets, each comprising an array having K2 rows and K1 columns. For each unencoded subdata set, N1−K1 C1-parity bytes are generated for each row and N2−K2 C2-parity bytes are generated for each column. The C1 and C2 parity bytes are appended to the ends of the rows and columns, respectively, to form encoded C1 and C2 codewords. Each C1 codeword in a data set is assigned a specific codeword header to form a plurality of partial codeword objects (PCOs). Each PCO is mapped onto a logical data track according to information within its header. On each logical data track, adjacent PCOs are merged to form codeword objects (COs), which are modulation encoded and mapped into synchronized COs. T synchronized COs are then written simultaneously to the data tape, where T is the number of concurrent active tracks on the data tape.
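A minimal sketch of the row/column encoding described above, assuming small illustrative parameters and a stand-in XOR-based parity in place of the Reed-Solomon codes an actual tape drive would use:

```python
# Illustrative parameters only; real subdata sets are much larger.
K1, K2 = 4, 3        # unencoded subdata set: K2 rows x K1 columns
N1, N2 = 6, 5        # encoded size: N1-K1 C1-parity bytes per row, N2-K2 C2-parity bytes per column

def toy_parity(symbols, n_parity):
    """Stand-in for a Reed-Solomon encoder: repeats a single XOR checksum."""
    chk = 0
    for s in symbols:
        chk ^= s
    return [chk] * n_parity

def encode_subdata_set(rows):
    # C2: generate N2-K2 parity bytes for every column of the K2 x K1 data array.
    parity_rows = [[0] * K1 for _ in range(N2 - K2)]
    for col in range(K1):
        column = [rows[r][col] for r in range(K2)]
        for i, p in enumerate(toy_parity(column, N2 - K2)):
            parity_rows[i][col] = p
    extended = rows + parity_rows                    # N2 rows x K1 columns
    # C1: append N1-K1 parity bytes to every row, forming C1 codewords.
    return [row + toy_parity(row, N1 - K1) for row in extended]   # N2 x N1 array

data = [[(r * K1 + c) & 0xFF for c in range(K1)] for r in range(K2)]
encoded = encode_subdata_set(data)
print(len(encoded), len(encoded[0]))                 # -> 5 6
```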
Abstract:
Provided is a technology capable of managing the processing status of hardware blocks with fewer registers. A processing system includes a buffer composed of a plurality of segments which store data to be input to the processing system, one transaction per segment, in the order of input; a plurality of processing units which perform a series of processes on the data in a given order; a plurality of first tables corresponding respectively to the plurality of processing units, each first table storing beginning information indicating a beginning segment among a plurality of segments at continuous addresses for which the corresponding processing unit has completed its process, end information indicating an end segment among them, and existence information indicating the presence or absence of segments for which the corresponding processing unit has completed its process; and a management unit which manages data transfers between the buffer and the plurality of processing units so that the series of processes is performed in the given order on the basis of the processing status of the series of processes retained in the plurality of first tables.
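A sketch of the per-processing-unit "first table", assuming a small number of buffer segments; the field and method names are hypothetical:

```python
from dataclasses import dataclass

NUM_SEGMENTS = 16

@dataclass
class FirstTable:
    begin: int = 0              # beginning segment of the contiguous completed range
    end: int = 0                # end segment of the contiguous completed range
    has_completed: bool = False # existence of any segments completed by this unit

    def mark_completed(self, segment):
        # Completed segments are assumed to lie at continuous addresses.
        if not self.has_completed:
            self.begin = self.end = segment
            self.has_completed = True
        else:
            self.end = segment

    def completed_range(self):
        return (self.begin, self.end) if self.has_completed else None

# The management unit needs only three fields per processing unit to decide
# which segments the next unit in the pipeline may consume.
tables = [FirstTable() for _ in range(3)]   # one first table per processing unit
tables[0].mark_completed(0)
tables[0].mark_completed(1)
print(tables[0].completed_range())          # -> (0, 1)
```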
Abstract:
A method for decoding encoded data comprising integrated data and header protection is disclosed herein. In one embodiment, such a method includes receiving an extended data array. The extended data array includes a data array organized into rows and columns, headers appended to the rows of the data array, column ECC parity protecting the columns of the data array, and row ECC parity protecting the rows and headers combined. The method then decodes the extended data array. Among other operations, this decoding step includes checking the header associated with each row to determine whether the header is legal. If the header is legal, the method determines the contribution of the header to the corresponding row ECC parity. The method then reverses the contribution of the header to the corresponding row ECC parity. A corresponding apparatus (i.e., a tape drive configured to implement the above-described method) is also disclosed herein.
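A sketch of reversing the header contribution for one row, assuming a linear stand-in parity (a single XOR byte) in place of real row ECC; because such codes are linear, the parity contribution of a legal header can be computed separately and cancelled from the combined row parity:

```python
def xor_parity(symbols):
    chk = 0
    for s in symbols:
        chk ^= s
    return chk

def is_legal_header(header):
    # Hypothetical legality check, e.g. a known format field in the first byte.
    return header[0] == 0x5A

def strip_header_contribution(header, data, row_parity):
    if not is_legal_header(header):
        return None                                   # fall back to full-row decoding
    header_contribution = xor_parity(header + [0] * len(data))
    return row_parity ^ header_contribution           # parity now covers the data only

header, data = [0x5A, 0x01], [0x10, 0x20, 0x30]
row_parity = xor_parity(header + data)                # parity over header and row combined
print(strip_header_contribution(header, data, row_parity) == xor_parity(data))   # True
```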
Abstract:
A method for integrating data and header protection in tape drives includes receiving an array of data organized into rows and columns. The array is extended to include one or more headers for each row of data in the array. The method provides two dimensions of error correction code (ECC) protection for the data in the array and a single dimension of ECC protection for the headers in the array. A corresponding apparatus is also disclosed herein.
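A sketch of the extended-array layout, assuming illustrative dimensions and the same XOR stand-in for ECC: the headers receive only row protection, while the data receives both row and column protection:

```python
ROWS, COLS, HDR = 4, 6, 2      # data rows, data columns, header bytes per row (illustrative)

def xor_parity(symbols):
    chk = 0
    for s in symbols:
        chk ^= s
    return [chk]

def build_extended_array(data, headers):
    # Column ECC over the data columns only (headers are excluded).
    col_parity = [xor_parity([data[r][c] for r in range(ROWS)])[0] for c in range(COLS)]
    body = data + [col_parity]
    # Row ECC over header and row combined (headers of parity rows assumed zero here).
    ext = []
    for r, row in enumerate(body):
        hdr = headers[r] if r < ROWS else [0] * HDR
        ext.append(hdr + row + xor_parity(hdr + row))
    return ext

data = [[1] * COLS for _ in range(ROWS)]
headers = [[0xA0, r] for r in range(ROWS)]
ext = build_extended_array(data, headers)
print(len(ext), len(ext[0]))     # -> 5 9
```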
Abstract:
A method for equalizing the bandwidth of requesters using a shared memory system is disclosed. In one embodiment, such a method includes receiving multiple access requests to access a shared memory system. Each access request originates from a different requester coupled to the shared memory system. The method then determines which of the access requests has been waiting the longest to access the shared memory system. The access requests are then ordered so that the access request that has been waiting the longest is transmitted to the shared memory system after the other access requests. The requester associated with the longest-waiting access request may then transmit additional access requests to the shared memory system immediately after the longest-waiting access request has been transmitted. A corresponding apparatus and computer program product are also disclosed.
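A sketch of the ordering policy, assuming each request carries an arrival time; the request that has been waiting longest is placed last in the batch so its requester can queue follow-on requests immediately behind it:

```python
from collections import namedtuple

Request = namedtuple("Request", "requester arrival_time")

def order_requests(pending):
    # Sort youngest-first so the oldest (longest-waiting) request is transmitted last.
    return sorted(pending, key=lambda r: r.arrival_time, reverse=True)

batch = [Request("A", 5), Request("B", 1), Request("C", 9)]
print([r.requester for r in order_requests(batch)])   # ['C', 'A', 'B']: B waited longest, goes last
```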
Abstract:
A magnetic tape drive having a tape drive system for moving magnetic tape, a tape read/write and servo system, tape cartridge load/unload systems, I/O communications, memory, and a control system operates in three modes to conserve energy. A first low power mode powers the I/O communications, the memory, and the control system. If a magnetic tape cartridge is in the loaded position in the magnetic tape drive, a second low power mode powers the same components as the first low power mode and additionally powers the tape drive system to apply tension to a magnetic tape of the magnetic tape cartridge. In the first and second low power modes, the control system operates the I/O communications, the memory, and the control system to respond to and execute commands received at the I/O communications if the commands are executable without magnetic tape access. The third, full power mode is entered if a command received at the I/O communications requires magnetic tape access.
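A hypothetical sketch of the three-mode selection rule; the mode names and the policy function are illustrative assumptions, not taken from the disclosure:

```python
from enum import Enum

class PowerMode(Enum):
    LOW_1 = 1    # I/O communications, memory, and control system powered
    LOW_2 = 2    # LOW_1 plus the tape drive system applying tension to a loaded tape
    FULL = 3     # full power, for commands requiring magnetic tape access

def select_mode(cartridge_loaded, command_needs_tape_access):
    if command_needs_tape_access:
        return PowerMode.FULL
    return PowerMode.LOW_2 if cartridge_loaded else PowerMode.LOW_1

print(select_mode(cartridge_loaded=True, command_needs_tape_access=False))   # PowerMode.LOW_2
```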
Abstract:
A method for transferring corrected data to an external buffer within a tape drive is provided. After the receipt of data from a data recording medium, the data are stored in an external buffer. The data are then transferred from the external buffer to an error correction code (ECC) device. Any errors in the data within the ECC device are corrected. The corrected data are subsequently divided into multiple sub-units, and a transfer flag is added to each of the sub-units having corrected data. Only the sub-units having corrected data are transferred from the ECC device back to the external buffer.
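A sketch of the flagged sub-unit transfer, assuming an illustrative sub-unit size and record layout; only sub-units that actually contain corrected bytes are written back to the external buffer:

```python
SUBUNIT_SIZE = 4   # illustrative sub-unit size in bytes

def split_with_flags(corrected, corrected_positions):
    subunits = []
    for start in range(0, len(corrected), SUBUNIT_SIZE):
        chunk = corrected[start:start + SUBUNIT_SIZE]
        flag = any(start <= p < start + SUBUNIT_SIZE for p in corrected_positions)
        subunits.append({"offset": start, "data": chunk, "transfer": flag})
    return subunits

def transfer_back(buffer, subunits):
    for su in subunits:
        if su["transfer"]:                    # skip sub-units with no corrected bytes
            buffer[su["offset"]:su["offset"] + len(su["data"])] = su["data"]

buf = bytearray(range(16))                    # stand-in for the external buffer
corrected = bytearray(buf)
corrected[5] ^= 0xFF                          # suppose ECC fixed the byte at offset 5
transfer_back(buf, split_with_flags(corrected, corrected_positions=[5]))
print(buf == corrected)                       # True: only the flagged sub-unit was written back
```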
Abstract:
Methods, logic, apparatus and computer program product write data, comprising less than a full Data Set, to magnetic tape. Data is received from a host, a do-not-interleave command is issued, and C1 and C2 ECC are computed. Codeword Quad (CQ) sets are then formed. At least one CQ set of the Data Set is written to a magnetic tape in a non-interleaved manner, and a Data Set Information Table (DSIT) is written to the magnetic tape immediately following the at least one written CQ set. An address transformation may be used to cancel interleaving. Writing a CQ set may include writing a plurality of contiguous instances of the CQ set to the magnetic tape to maintain the effectiveness of the ECC capability.
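A hypothetical sketch of the non-interleaved write of a partial Data Set; the repeat count and the record structure are illustrative assumptions, not the patented format:

```python
def write_partial_data_set(tape, cq_set, dsit, copies=3):
    for _ in range(copies):            # contiguous instances help preserve ECC effectiveness
        tape.append(("CQ", cq_set))
    tape.append(("DSIT", dsit))        # DSIT immediately follows the written CQ sets

tape = []
write_partial_data_set(tape, cq_set=b"codeword-quads", dsit=b"data-set-info-table")
print([kind for kind, _ in tape])      # ['CQ', 'CQ', 'CQ', 'DSIT']
```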
Abstract:
The present invention includes a plurality of CPUs using a memory as main memory, another function block using the memory as a buffer, a CPU interface which controls access transfers from the plurality of CPUs to the memory, and a DRAM controller which arbitrates the access transfers to the memory. The CPU interface causes access requests from the plurality of CPUs to wait, receives and stores the address, data transfer mode, and data size of each access, and notifies the DRAM controller of the access requests; upon receiving grant signals for the access requests, it sends the stored information to the DRAM controller according to the grant signals. The DRAM controller, on the basis of the access arbitration, specifies the CPUs for which transfers have been granted and sends the grant signals to the CPU interface.
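An illustrative sketch of the request/grant handshake; the class and field names and the simple FIFO arbitration are assumptions, not the patented arbitration scheme:

```python
from collections import deque

class CpuInterface:
    def __init__(self):
        self.pending = {}                       # cpu_id -> (address, mode, size)

    def request(self, cpu_id, address, mode, size):
        self.pending[cpu_id] = (address, mode, size)   # hold the access, store its info
        return cpu_id                                  # notify the DRAM controller

    def on_grant(self, cpu_id):
        return self.pending.pop(cpu_id)                # send stored info for the granted CPU

class DramController:
    def __init__(self, cpu_if):
        self.cpu_if, self.queue = cpu_if, deque()

    def notify(self, cpu_id):
        self.queue.append(cpu_id)

    def arbitrate(self):
        cpu_id = self.queue.popleft()           # FIFO arbitration, for illustration only
        return cpu_id, self.cpu_if.on_grant(cpu_id)

cpu_if = CpuInterface()
dram = DramController(cpu_if)
dram.notify(cpu_if.request(0, address=0x1000, mode="read", size=64))
print(dram.arbitrate())                         # -> (0, (4096, 'read', 64))
```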