Core

The core is based on a RISC in-order pipeline. Its control unit is intentionally kept lightweight. The architecture masks memory and operation latencies by relying heavily on hardware multithreading. By keeping the control logic light, the core can devote most of its resources to accelerating computation in highly data-parallel kernels. In the nu+ hardware multithreaded architecture, each hardware thread has its own PC, register file, and control registers. The number of threads is user configurable. A nu+ hardware thread is equivalent to a wavefront in AMD terminology and a CUDA warp in NVIDIA terminology. The processor uses a deep pipeline to improve clock speed.

nu+ microarchitecture

All threads share the same compute units. Execution pipelines are organized in hardware vector lanes (as in vector processors, each operator is replicated N times). Each thread can perform a SIMD operation on independent data, with data organized in a vector register file. The core supports a high-throughput non-coherent scratchpad memory, or SPM (corresponding to shared memory in NVIDIA terminology). The SPM is divided into a parameterized number of banks based on a user-configurable mapping function. The memory controller resolves bank collisions at run time, ensuring correct execution of SPM accesses from concurrent threads. Coherence mechanisms incur a high latency and are not strictly necessary for many applications.
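As an example, a simple word-interleaved mapping function could select the bank from the low-order word-address bits. This is only a sketch: the actual mapping function is user-configurable, and the signal and macro names below are assumptions.

localparam BANK_SEL_BITS = $clog2( `SPM_BANKS ); // `SPM_BANKS banks assumed
// sketch: word-interleaved mapping, consecutive 32-bit words map to consecutive banks
assign bank_index = spm_address[BANK_SEL_BITS + 1 : 2];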

Instruction Fetch Stage

The Instruction Fetch stage schedules the next thread PC from the eligible thread pool, which is maintained by the Thread Controller unit. Available threads are scheduled in a round-robin fashion. Furthermore, at boot time, the Thread Controller can initialize each thread PC through a dedicated interface.

The instruction cache is N-way set associative (by default it has 4 ways and 128 sets, with 512-bit cache lines). An SRAM is allocated for each instantiated way, in order to provide parallelism during the tag evaluation phase.
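With the default geometry (4 ways, 128 sets, 512-bit lines), a 32-bit address decomposes as follows. The field widths below are derived from those defaults; the struct itself is a sketch of the icache_address_t layout used by the snippets later in this section.

typedef struct packed {
   logic [18 : 0] tag;    // 32 - 7 (index) - 6 (offset) remaining bits
   logic [6 : 0]  index;  // log2( 128 ) sets
   logic [5 : 0]  offset; // log2( 512 / 8 ) bytes within the line
} icache_address_t;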

Once an eligible thread is selected, Instruction Fetch reads its PC and determines whether the requested line is already in the instruction cache. The module is divided into two stages. In the first stage, each way has a bank of memory containing tag values and validity information for the cache sets; this stage reads the requested set from all the ways in parallel and passes the data to the second stage. The tag memory has one cycle of latency, so the next stage handles the result: it compares the way tags previously read with the requested tag and, if they match, raises a cache hit. In that case, the first stage issues the instruction cache data address to the instruction cache data memory. If a miss occurs, an instruction memory transaction is issued to the Network Interface (or to the Cache Controller in case of a single core instance) and the thread is blocked until the instruction line is retrieved from main memory.

Finally, this module restores the PC in case of rollback. When a rollback occurs and the rollback signals are set by the Rollback Handler stage, the Instruction Fetch module overwrites the PC of the thread that issued the rollback.

Thread & PC selection

A thread is selected from the eligible ones using an external signal coming from the Thread Controller unit, while an internal round-robin arbiter issues the threads fairly. A different thread is elected every clock cycle; this mechanism makes nu+ a fine-grained multithreaded architecture.
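A minimal sketch of this selection logic follows; apart from tc_if_thread_en, the signal names are assumptions. The arbiter starts its search just after the thread granted in the previous cycle, which yields the fair rotation described above.

logic [`THREAD_NUMB - 1 : 0]           eligible, grant_oh;
logic [$clog2( `THREAD_NUMB ) - 1 : 0] last_granted;

// eligible = internal thread state AND external Thread Controller enable
assign eligible = thread_valid & tc_if_thread_en;

always_comb begin
   grant_oh = '0;
   // grant the first eligible thread after the previously granted one
   for ( int i = 1; i <= `THREAD_NUMB; i++ )
      if ( eligible[( last_granted + i ) % `THREAD_NUMB] && grant_oh == '0 )
         grant_oh[( last_granted + i ) % `THREAD_NUMB] = 1'b1;
end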

The elected thread ID selects a specific PC, which is updated according to thread-related events: a new job from the Thread Controller, a rollback, an instruction cache miss, or normal execution. When none of these apply, the valid signal is deasserted.

if ( tc_job_valid && thread_id == tc_job_thread_id ) // new job
   next_pc[thread_id] <= tc_job_pc;
else if ( rollback_valid[thread_id] ) // rollback
   next_pc[thread_id] <= rollback_pc_value[thread_id];
else if ( stage1_miss[thread_id] && stage1_thread_scheduled_id == thread_id ) // Inst miss
   next_pc[thread_id] <= next_pc[thread_id] - address_t'( 3'd4 );
else if ( thread_scheduled_bitmap[thread_id] )  // Normal execution
   next_pc[thread_id] <= next_pc[thread_id] + address_t'( 3'd4 );

Cache LRU

The pseudo-LRU replacement policy works as described at this link, page 13.

The hit interface is enabled when a way has to be moved to the MRU position, i.e. when a hit occurs.

The update interface is enabled when the LRU way has to be requested in order to fill it. This happens when a new instruction cache line arrives from memory. No replacement is needed because the instruction memory area is non-coherent.
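For the default 4 ways, the pseudo-LRU can be kept as a 3-bit binary tree per set. The following is a minimal sketch of the two interfaces just described; apart from way_lru, the signal and macro names are assumptions.

// one 3-bit tree per set; bit 0 selects the half, bits 1/2 the way within it
logic [2 : 0] plru_tree [`ICACHE_SET];

// hit interface: mark the half and the way just used as MRU
always_ff @( posedge clk )
   if ( hit_en )
      case ( hit_way )
         2'd0 : begin plru_tree[hit_index][0] <= 1'b1; plru_tree[hit_index][1] <= 1'b1; end
         2'd1 : begin plru_tree[hit_index][0] <= 1'b1; plru_tree[hit_index][1] <= 1'b0; end
         2'd2 : begin plru_tree[hit_index][0] <= 1'b0; plru_tree[hit_index][2] <= 1'b1; end
         2'd3 : begin plru_tree[hit_index][0] <= 1'b0; plru_tree[hit_index][2] <= 1'b0; end
      endcase

// update interface: walk the tree towards the least recently used way
assign way_lru = plru_tree[update_index][0]
               ? ( plru_tree[update_index][2] ? 2'd3 : 2'd2 )
               : ( plru_tree[update_index][1] ? 2'd1 : 2'd0 );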

Tag and Data instruction cache

The tag and data caches are accessed in the same way, using the same input. The caches are read-enabled only if the current instruction is valid. The result is validated only if there is a hit.

A line is registered as valid when an instruction line arrives from memory for that way. The line_valid information is registered per way as well. Note the cut-through in the validity check: if the incoming update targets the same address as the currently selected one, the valid output signal is asserted immediately.

if ( tc_valid_update_cache && way_lru == way )
   line_valid[tc_addr_update.index] <= 1;
if ( tc_valid_update_cache && way_lru == way && tc_addr_update.index == icache_address_selected.index)
   line_valid_selected[way]         <= 1; //cut-through
else
   line_valid_selected[way]         <= line_valid[icache_address_selected.index] & instruction_valid;

In order to keep the information about the currently scheduled thread aligned with the data available at the cache outputs, the following signals are registered: PC, instruction_valid, line_valid, address, and thread_id.

Hit/miss detection

There is a hit only if the line is valid and the tags match. The hit signal is per way (NOT per thread).

for ( way = 0; way < `ICACHE_WAY; way++ ) 
    assign hit_miss[way] = tag_read_data[way] == stage1_icache_address.tag && line_valid_selected[way];

The thread causing the miss is registered.

for ( thread_id = 0; thread_id < `THREAD_NUMB; thread_id++ )
    assign stage1_miss[thread_id] = ( stage1_thread_scheduled_id == thread_id ) ? ~|hit_miss & stage1_instruction_valid : 1'b0;

Output logic

The cache miss is asserted only when the current instruction is valid, there is no rollback, and there is a miss for the currently scheduled thread.

if_cache_miss = stage1_instruction_valid & ~rollback_valid[stage1_thread_scheduled_id] & stage1_miss[stage1_thread_scheduled_id];

The instruction valid is asserted only when the current instruction is valid, there is no rollback, and there is no miss for the currently scheduled thread.

if_valid = stage1_instruction_valid & ~rollback_valid[stage1_thread_scheduled_id] & ~stage1_miss[stage1_thread_scheduled_id];

Decode stage

The Decode stage decodes a fetched instruction from Instruction Fetch and produces the control signals for the datapath. The output signal dec_instr helps the execution and control modules manage the issued instruction and is propagated through each pipeline stage. Instruction types are described in the ISA section.

The goal of this stage is to fill all the fields of the dec_instr signal using the fetched instruction if_inst_scheduled. The if_inst_scheduled signal is composed of an opcode and a body. The body can be of eight different, mutually exclusive types:

typedef union packed {
   RR_instruction_body_t RR_body;
   RI_instruction_body_t RI_body;
   MVI_instruction_body_t MVI_body;
   MEM_instruction_body_t MEM_body;
   MPOLI_instruction_body_t MPOLI_body;
   JBA_instruction_body_t JBA_body;
   JRA_instruction_body_t JRA_body;
   CTR_instruction_body_t CTR_body;
} instruction_body_t;

A combinatorial case construct fills the dec_instr fields by reading the opcode and other instruction bits (a minimal sketch follows the list below). Instruction types are:

   - RR (Register to Register) has a destination register and two source registers.
   - RI (Register Immediate) has a destination register, one source register and an immediate encoded in the instruction word.
   - MVI (Move Immediate) has a destination register and a 16-bit immediate.
   - MEM (Memory Instruction) has a destination/source field: in case of a load, the first register acts as the destination register; in case of a store, the first register contains the value to store. In both cases, the second source register contains the base address and the immediate is encoded in the instruction. The sum of base address and immediate gives the effective memory address.
   - JBA (Jump Base Address) handles conditional and unconditional jumps; the destination address is calculated as the sum of the source register and the immediate (if present).
   - CTR (Control instructions).
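A minimal sketch of the decode case for two of the formats above; the dec_instr field names not appearing elsewhere in this page and the opcode literals are assumptions.

always_comb begin
   dec_instr = '0; // safe defaults
   case ( if_inst_scheduled.opcode )
      RR_OP : begin // two sources, one destination
         dec_instr.source0     = if_inst_scheduled.body.RR_body.source0;
         dec_instr.source1     = if_inst_scheduled.body.RR_body.source1;
         dec_instr.destination = if_inst_scheduled.body.RR_body.destination;
      end
      RI_OP : begin // source 1 replaced by an immediate
         dec_instr.source0              = if_inst_scheduled.body.RI_body.source0;
         dec_instr.immediate            = if_inst_scheduled.body.RI_body.immediate;
         dec_instr.is_source1_immediate = 1'b1;
         dec_instr.destination          = if_inst_scheduled.body.RI_body.destination;
      end
      default : ; // remaining formats omitted for brevity
   endcase
end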

Instruction scheduler stage

The Instruction Scheduler (often referred to as the Dynamic Scheduler) schedules active threads in a round-robin fashion, checking for data and structural hazards. A scoreboard for each thread is allocated in this module: whenever an instruction is scheduled, the scoreboard keeps track of which registers are busy by setting the corresponding bit in its structure. In this way, if another instruction requires that very register, a RAW or WAW hazard is raised. Dually, when an instruction reaches the Writeback module and writes the computed outcome to its destination register, the corresponding bit is cleared in the scoreboard; from this moment onward the register is free to use.

Fetched instructions are stored in FIFOs in the Instruction Buffer stage, one per thread. The Dynamic Scheduler checks data hazards and determines which thread can be scheduled to Operand Fetch.

FPU structural hazards are checked in this component as well. The FP pipe has a single output demux to the Writeback unit: there is no conflict control inside the FP pipe, so two different operations could terminate at the same time and collide on the output propagation. At this stage only data hazards and FPU structural hazards are checked; other structural hazards are handled on-the-fly in the Writeback stage.

The Dynamic Scheduler relies on a light scoreboarding system, one per thread. Based on the scoreboard and on the fetched instruction, the Dynamic Scheduler determines which threads are eligible to be scheduled, and a thread is selected through round-robin arbitration. A thread is eligible if its can_issue bit is high.

   assign can_issue[thread_id] = ib_instructions_valid[thread_id] && !( hazard_raw || hazard_waw )
                              && ( ~( |wb_fifo_full ) ) && ~rb_valid[thread_id]
                              && can_issue_fp && can_issue_spm && release_val[thread_id]
                              && sync_detect[thread_id] && ~dsu_stop_issue[thread_id];

The arbiter evaluates all the can_issue bits and selects, in a round-robin fashion, a thread from the eligible pool. Next, the Dynamic Scheduler forwards the selected instruction and the related thread information to Operand Fetch, then updates the scoreboard of the issued thread. When the thread is scheduled, the scoreboard_set_bitmap is updated with the destination register: a bit is set in order to track the destination register used by the current operation. From this moment onward, the register is busy, and an instruction that wants to use it raises a data hazard.

   assign hazard_raw                       = |( ( source_mask_0 | source_mask_1 ) & scoreboard );
   assign hazard_waw                       = |( destination_mask & scoreboard );
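For illustration, these masks can be built as one-hot vectors indexed by the register number; this is a sketch, and ib_instruction is an assumed name for the instruction candidate for issue.

   // one bit per scoreboard entry, at the position of the accessed register
   assign source_mask_0    = scoreboard_t'( 1'b1 ) << ib_instruction.source0;
   assign source_mask_1    = scoreboard_t'( 1'b1 ) << ib_instruction.source1;
   assign destination_mask = scoreboard_t'( 1'b1 ) << ib_instruction.destination;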

Dually, the scoreboard_clear_bitmap tracks all registers released by the Writeback stage. An operation in Writeback releases its destination register.

Finally, when a rollback occurs, the Dynamic Scheduler flushes the FIFO of the thread that raised the rollback and restores its scoreboard to its previous state.

Operand fetch stage

Operand Fetch builds the operands for the Execution Pipeline. As said before, the nu+ core supports SIMD operations; for this purpose it has two register files: a scalar register file (SRF) and a vector register file (VRF). Each register in the SRF is `REGISTER_SIZE bits wide (default 32). A register in the VRF is wider, allocating a scalar register for each hardware lane (`REGISTER_SIZE x `HW_LANE, default 32 bits x 16 HW lanes). Both the SRF and the VRF have the same number of registers (`REGISTER_NUMBER, defined in nuplus_define.sv, default 64). Both register files are allocated in SRAM memories with two read ports and one write port each.

The register file read ports receive the scheduled instruction as input, and the data read is forwarded to the second stage, which is in charge of building operands 0 and 1. Operand 0, in case of a memory access, contains the effective memory address, built by adding the base address and the immediate offset; in all other cases it contains the register value.

if ( next_issue_inst_scheduled.is_memory_access ) 
   opf_fecthed_op0_buff <= {`HW_LANE{rd_out0_scalar + scal_reg_size_t'( next_issue_inst_scheduled.immediate )}};
else
   opf_fecthed_op0_buff <= {`HW_LANE{rd_out0_scalar}};

Immediate values span operand 1: when the current instruction has an immediate, it is replicated on each vector element of operand 1. In all other cases, operand 1 contains the effective register value.

if ( next_issue_inst_scheduled.is_source1_immediate )
   opf_fecthed_op1_buff <= {`HW_LANE{next_issue_inst_scheduled.immediate}};
else
   opf_fecthed_op1_buff <= {`HW_LANE{rd_out1_scalar}};

The PC is stored in a special register in Instruction Fetch, hence it is not handled by the scalar register file. This register is forwarded to this stage along with the decoded instruction and used in the operand composition when needed.

The register file write port stores output data coming from the Writeback stage. In case of a vector operation, the wb_result_hw_lane_mask determines which lanes are affected by the current operation.

for ( lane_id = 0; lane_id < `HW_LANE; lane_id ++ ) begin : LANE_WRITE_EN
   assign write_en_byte[lane_id] = wb_result.wb_result_write_byte_enable & {( `BYTE_PER_REGISTER ){wb_result.wb_result_hw_lane_mask[lane_id] & wr_en_vector}};
end

Each thread has its own register file; this is achieved by allocating a bigger SRAM (`REGISTER_NUMBER x `THREAD_NUMB entries).
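A sketch of the resulting addressing: the physical SRAM address simply concatenates the thread ID with the register index (the address signal names below are assumptions).

// `REGISTER_NUMBER x `THREAD_NUMB entries, one register file slice per thread
assign sram_read_address_0 = {issue_thread_id, next_issue_inst_scheduled.source0};
assign sram_read_address_1 = {issue_thread_id, next_issue_inst_scheduled.source1};
assign sram_write_address  = {wb_thread_id, wb_destination_register};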

When a masked instruction is issued, the special register `MASK_REG (default scalar register s60) is stored in opf_fecthed_mask. When source 1 is an immediate, its value is replicated on each vector element. Memory access and branch operations require a base address; in both cases the Decode module maps the base address to source0.

Integer Arithmetic & Logic unit

This is the main execution stage. It executes jumps, arithmetic and logic operations, and moves; it also manages control register accesses. For further details about the supported operations, see the ISA page.

In order to dispatch the current instruction to the right operator, this module relies on the selection fields of the decoded instruction. For example:

assign is_jmpsr = opf_inst_scheduled.pipe_sel == PIPE_BRANCH & opf_inst_scheduled.is_int & opf_inst_scheduled.is_branch;

Vector operations are executed in parallel, spanning data over all the HW lanes: all lanes perform the same operation, and the final vector result vec_result is composed by merging the scalar results from the different lanes.
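A sketch of this lane replication, assuming a generate loop and hypothetical operation literals; only vec_result and the operand names appear elsewhere in this page, the rest is illustrative.

genvar lane;
generate
   for ( lane = 0; lane < `HW_LANE; lane++ ) begin : INT_OP_LANE
      // every lane executes the same operation on its own 32-bit slice
      always_comb
         case ( opf_inst_scheduled.op_code ) // field name is an assumption
            OP_ADD  : vec_result[lane] = opf_fecthed_op0[lane] + opf_fecthed_op1[lane];
            OP_SUB  : vec_result[lane] = opf_fecthed_op0[lane] - opf_fecthed_op1[lane];
            default : vec_result[lane] = '0;
         endcase
   end
endgenerate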

On the other hand, when an instruction performs a scalar operation, only the first lane of vec_result contains a useful result. An output logic fetches the result from the right operator based on the issued instruction. For example:

if( is_compare )
   int_result[0] <= cmp_result;
else if (is_shuffle)
   int_result    <= shuffle_result;
else
   int_result    <= vec_result;

Control registers

TODO: add the non-coherent memory map.

The Control Register unit holds status and performance information. Some status information is shared among all threads (such as the Core ID or the global performance counter), other information is thread-specific (such as the Thread ID and the thread PC):

   - TILE_ID: shared information, returns current tile identifier.
   - CORE_ID: shared information, returns current core identifier.
   - THREAD_ID: private information, returns thread identifier.
   - GLOBAL_ID: private information, returns Tile ID, Core ID and Thread ID merged together.
   - GCOUNTER_LOW: shared information, returns the low part of the global performance counter.
   - GCOUNTER_HIGH: shared information, returns the high part of the global performance counter.
   - THREAD_EN: shared information, returns the thread-active bitmap mask, one bit per thread.
   - MISS_DATA: shared information, returns the number of data cache misses that have occurred so far.
   - MISS_INSTR: shared information, returns the number of instruction cache misses that have occurred so far.
   - PC: private information, returns the current value of the PC.
   - TRAP_REASON: private information, returns the trap reason.
   - THREAD_STATUS: private information, returns the current thread status.
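As an illustration, a read access to these registers could be decoded as follows; this is a sketch, the enum literals mirror the list above but their encodings and the surrounding signal names are assumptions.

always_comb
   case ( cr_address )
      TILE_ID       : cr_result = tile_id;                  // shared
      THREAD_ID     : cr_result = cr_thread_id;             // per thread
      GCOUNTER_LOW  : cr_result = global_counter[31 : 0];   // shared
      GCOUNTER_HIGH : cr_result = global_counter[63 : 32];  // shared
      PC            : cr_result = thread_pc[cr_thread_id];  // per thread
      default       : cr_result = '0;
   endcase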

The Control Register unit has a direct interface to the Host Interface, so the user can access this information on the host side.

Control registers are handled and accessed (on the core side) by the Integer Arithmetic & Logic unit.

Scratchpad unit

This unit is described in the dedicated scratchpad page.

Load/Store unit

This unit is described in the dedicated load/store subsection inside the coherence section.

Floating point unit

A multistage floating point pipeline supports all basic FP operations according to the IEEE 754-2008 standard.

Barrier unit

This unit is described in the dedicated synchronization section.

Branch unit

The Branch Control unit handles conditional and unconditional jumps and restores the scoreboards when a jump is taken. It signals to the Rollback Handler whether a jump has to be taken or not. The base address or condition is stored in opf_fecthed_op0[0], the immediate in opf_fecthed_op1[0].

Nu+ supports two jump instruction formats:

  • JRA: Jump Relative Address is an unconditional jump instruction; it takes an immediate, and the core always jumps to the PC + immediate location. E.g. jmp -12 -> BC will jump to the PC-12 memory location (3 instructions back).
  • JBA: Jump Base Address can be a conditional or unconditional jump; it takes a register and an immediate as input. In case of a conditional jump, the input register holds the jump condition: if the condition is satisfied, BC will jump to the PC + immediate location. E.g. branch_eqz s4, -12 -> BC will jump to the PC-12 location if register s4 equals zero.

In case of an unconditional jump, the input register holds the effective address to jump to. E.g. jmp s4 -> BC will jump to the memory location stored in s4.
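A sketch of the resulting taken-jump decision; the is_conditional and branch_type fields and the condition literals are assumptions, while jump, opf_inst_scheduled, and opf_fecthed_op0 appear in the snippets of this page.

always_comb
   if ( opf_inst_scheduled.is_conditional )
      case ( opf_inst_scheduled.branch_type )
         BRANCH_EQZ : jump = ( opf_fecthed_op0[0] == '0 ); // e.g. branch_eqz
         BRANCH_NEZ : jump = ( opf_fecthed_op0[0] != '0 );
         default    : jump = 1'b0;
      endcase
   else
      jump = 1'b1; // unconditional jumps are always taken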

Note these two signals:

assign bc_rollback_enable  = jump & opf_inst_scheduled.is_branch & opf_valid;
assign bc_rollback_valid   = opf_valid && opf_inst_scheduled.pipe_sel == PIPE_BRANCH && ~bc_rollback_enable;

The enable is asserted only when a jump is taken. The valid is asserted whenever a branch operation is executed, regardless of the outcome.

Writeback stage

The Writeback stage forwards the results coming from the execution pipes to the register files. The execution pipelines have different lengths and latencies, so instructions issued in different cycles can arrive at the Writeback in the same clock cycle. The Writeback module avoids structural hazards on-the-fly: load/store or scratchpad memory operations have a variable latency that is not known at compile or issue time, which can result in unpredictable structural hazards at this stage.

Writeback Request FIFOs

The Writeback module resolves collisions on itself on-the-fly using a set of EX_PIPES queues, one per execution pipe: each queue stores the results of the corresponding pipe, and an arbiter selects one result in a round-robin fashion and forwards it to its destination register. Each queue stores all the information needed for a writeback operation, such as the destination register and the write mask.

Note that each writeback_request_fifo has its almost_full_threashold reduced by 4, which corresponds to the worst-case distance from the first stage of Operand Fetch. This prevents losing in-flight operations.
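A sketch of such a queue instantiation; the FIFO module and its port and parameter names are assumptions, only wb_fifo_full appears elsewhere in this page.

sync_fifo #(
   .WIDTH                 ( $bits( wb_request_t ) ),
   .SIZE                  ( 8                     ),
   .ALMOST_FULL_THRESHOLD ( 8 - 4                 ) // 4 slots of slack towards Operand Fetch
) writeback_request_fifo (
   .clk         ( clk                   ),
   .reset       ( reset                 ),
   .enqueue_en  ( pipe_result_valid     ), // one instance per execution pipe
   .value_i     ( pipe_wb_request       ),
   .dequeue_en  ( wb_arbiter_grant      ),
   .value_o     ( pending_wb_request    ),
   .almost_full ( wb_fifo_full[PIPE_ID] ),
   .empty       (                       )
);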

Result Composer

Depending on the operation, two tasks have to be done for each execution pipe (if needed):

  • create a byte-grained register mask, in order to avoid writing undesired bytes (e.g. a load_8 operation writes only the first byte, not the whole 4-byte register);
  • compose the final result, moving the bytes to the right positions and performing 8-bit/16-bit/32-bit load sign extensions as well.

For example, the sign extension performed for an 8-bit load:

// sign-extend the byte read from memory to `REGISTER_SIZE bits
assign byte_data_mem_s[j] = {{( `REGISTER_SIZE - 8 ){word_data_mem[j][7]} }, word_data_mem[j][7 : 0]};
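Dually, the byte-grained mask for the same 8-bit load enables only the least significant byte of the destination register; the mask name below is an assumption.

// only byte 0 of the `BYTE_PER_REGISTER-byte register is written by a load_8
assign register_mask_load8 = {{( `BYTE_PER_REGISTER - 1 ){1'b0}}, 1'b1};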

Rollback handler

The Rollback Handler restores the PCs and scoreboards of the thread that issued a rollback. In case of a jump or a trap, the Branch module in the Execution pipeline issues a rollback request to this stage, passing the ID of the thread that issued the rollback, the old scoreboard, and the PC to restore.

Furthermore, the Rollback Handler flushes all the issued and queued requests of this thread still in the pipeline. It uses a clear_bitmap for each thread in this way:

  • each time a rollback is issued, the clear_bitmap starts from scratch;
if ( rollback_valid[thread_id] )
   clear_bitmap[thread_id] <= scoreboard_t'( 1'b0 );
  • each time an operation is issued, it is recorded in the scoreboard_temp mask:
scoreboard_temp  = clear_bitmap[thread_id] & ~( scoreboard_clear_int & {`SCOREBOARD_LENGTH{bc_rollback_valid}} ) | ( scoreboard_set_issue & {`SCOREBOARD_LENGTH{is_instruction_valid}} );
  • when a rollback is issued, the scoreboard_temp mask records all the operations issued before the rollback, except the jump operation; the Instruction Scheduler uses this mask to undo all the instructions scheduled after the rollback.

Note that all threads are completely independent.

Thread controller

The Thread Controller handles the eligible thread pool. This module blocks threads that cannot proceed due to cache misses or scoreboarding (hazards). Dually, the Thread Controller handles thread wake-up when the blocking conditions no longer hold.

assign tc_if_thread_en = ~wait_thread_mask & thread_en & ~ib_fifo_full;
assign wait_thread_mask_next = ( wait_thread_mask | thread_miss_sleep ) & ~thread_mem_wakeup;
assign thread_mem_wakeup = // 1: thread[i] activated, 0: thread[i] state unmodified 
assign thread_miss_sleep = // 1: thread[i] asleep, 0: thread[i] state unmodified

Note that a load miss blocks the corresponding thread through the ib_fifo_full signal until the data is retrieved from main memory.

Furthermore, the Thread Controller interfaces the instruction cache with the main memory. The architecture supports only one level of caching for instructions; in other words, when an instruction cache miss occurs, the line is retrieved directly from main memory through the network-on-chip. An instruction miss is handled by a simple 3-state FSM: first, it waits for a cache miss request; then it saves all the needed information and waits for the memory response; finally, it sends a "wake-up" signal to the core and waits again for a cache miss request.
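A sketch of that FSM; the state and signal names are assumptions.

typedef enum logic [1 : 0] { MISS_IDLE, MISS_WAIT_MEM, MISS_WAKE_UP } miss_state_t;
miss_state_t state;

always_ff @( posedge clk )
   case ( state )
      MISS_IDLE     : if ( icache_miss_valid ) begin
         pending_address <= icache_miss_address; // save the request
         state           <= MISS_WAIT_MEM;
      end
      MISS_WAIT_MEM : if ( mem_instr_response_valid )
         state <= MISS_WAKE_UP;                  // line returned from main memory
      MISS_WAKE_UP  : state <= MISS_IDLE;        // one-cycle wake-up pulse to the core
   endcase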

Note that if, during the memory wait, (1) another cache miss arises and/or (2) a cache miss arises for the same address from another thread, a dedicated queue handles these situations: it queues the cache misses and merges requests to the same instruction address from different threads.
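A sketch of the merging behaviour: each outstanding line keeps a bitmap of the threads waiting on it, so a second miss on the same address only sets a bit instead of allocating a new entry. Entry layout and names are assumptions.

// merge a new miss into a pending entry for the same address
if ( miss_valid )
   if ( pending_hit )                                          // line already requested
      pending[pending_hit_idx].thread_mask[miss_thread_id] <= 1'b1;
   else begin                                                  // allocate a new entry
      pending[free_idx].address     <= miss_address;
      pending[free_idx].thread_mask <= thread_mask_t'( 1'b1 ) << miss_thread_id;
   end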

The third task performed by the Thread Controller is to accept jobs from the host interface and redirect them to the core.
