The ''Naples Processing Unit'', dubbed '''NaplesPU''' or '''NPU''', is a comprehensive open-source manycore accelerator, encompassing all the architecture layers from the compute core up to the on-chip interconnect, the coherence memory hierarchy, and the compilation toolchain.

Entirely written in SystemVerilog HDL, '''NaplesPU''' exploits the three forms of parallelism that you normally find in modern compute architectures, particularly in heterogeneous accelerators such as GPU devices: vector parallelism, hardware multithreading, and manycore organization. Equipped with a complete LLVM-based compiler targeting the '''NaplesPU''' vector ISA, the '''NPU''' open-source project will let you experiment with all of the flavors of today’s manycore technologies.
 
The '''NPU''' manycore architecture is based on a parameterizable mesh of configurable tiles connected through a Network on Chip (NoC). Each tile has a Cache Controller and a Directory Controller, handling data coherence between different cores in different tiles. The compute core is based on a vector pipeline featuring a lightweight control unit, so as to devote most of the hardware resources to the acceleration of data-parallel kernels. Memory operations and long-latency instructions are masked by exploiting hardware multithreading. Each hardware thread (roughly equivalent to a wavefront in the OpenCL terminology or a CUDA warp in the NVIDIA terminology) has its own PC, register file, and control registers. The number of threads in the '''NaplesPU''' system is user-configurable.
 
[[File:Overview.jpeg|1200px|center|NaplesPU overview]]
 
== Getting started ==
 
This section shows how to approach the project for simulating or implementing a kernel for the NaplesPU architecture. A kernel here means a complex application, such as matrix multiplication or matrix transposition, written in a high-level programming language such as C/C++.
  
 
=== Required software ===
 
Simulation or implementation of any kernel relies on the following dependencies:
* Git
* Xilinx Vivado 2018.2 or ModelSim (e.g. Questa Sim-64 vsim 10.6c_1)
* NaplesPU toolchain
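A quick way to check that these tools are reachable from your shell is to query their versions; note that vivado and vsim are typically found only after sourcing their environment scripts, as described below. This is a generic sanity check, not part of the NaplesPU scripts:

<code>$ git --version && vivado -version && vsim -version</code>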
  
 
=== Building process ===
 
The first step is to obtain the source code of the NaplesPU architecture from the official repository by cloning it from [https://gitlab.com/vincenscotti/nuplus]

In an Ubuntu Linux environment, this step is accomplished with the following command:

<code> $ git clone https://github.com/AlessandroCilardo/NaplesPU </code>

In the NaplesPU repository, the toolchain is a git submodule, so it needs to be initialized and updated. In an Ubuntu Linux environment, run the following command from the root folder of the repository:

<code> $ git submodule update --init </code>
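Alternatively, the clone and the submodule initialization can be combined in one command; this relies on a standard git option rather than anything specific to NaplesPU:

<code> $ git clone --recurse-submodules https://github.com/AlessandroCilardo/NaplesPU </code>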
  
Then, the third step is to install the toolchain. This process is described [http://www.naplespu.com/doc/index.php?title=Toolchain here].
  
=== Simulate a kernel ===
The following folders are of particular interest for this purpose:
* software, which stores all the kernels;
* tools, which stores all the simulation scripts.

A kernel can be simulated in three ways:
* running the test.sh script;
* running setup_project.sh from the root folder of the repository, if the chosen simulator is Vivado;
* running simulate.sh from the root folder of the repository, if the chosen simulator is ModelSim.

First of all, source Vivado or ModelSim in the shell. This step is mandatory for all three ways. In an Ubuntu Linux environment:

<code>$ source Vivado/folder/location/settingsXX.sh</code>

where XX depends on the installed version of Vivado (32 or 64 bit).
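For instance, assuming a default 64-bit Vivado 2018.2 installation under /opt/Xilinx (an assumed install location; adjust the path to your system), the command would be:

<code>$ source /opt/Xilinx/Vivado/2018.2/settings64.sh</code>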
  
==== test.sh script ====
The test.sh script, located in the npu/tools folder, runs all the kernels listed in it and compares the output from the NPU with the expected result produced by a standard x86 architecture:

<code>$ ./test.sh [option]</code>

Options are:
* -h,  --help                 show this help
* -t,  --tool=vsim or vivado  specify the tool to use, default: vsim
* -cn, --core-numb=VALUE      specify the core number, default: 1
* -tn, --thread-numb=VALUE    specify the thread number, default: 8

The test.sh script automatically compiles the kernels and runs them on both the NaplesPU and x86 architectures. Once the simulation has terminated, the results of the two executions are compared, for each kernel, by a Python script to verify correctness.

In the tools folder, the file cosim.log stores the output of the simulator.
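For example, the following run from the tools folder keeps the defaults listed above (ModelSim, one core, eight threads) and then shows the end of the simulation log; the option spellings follow the help text above, so check ./test.sh -h if your copy of the script differs:

<code>$ ./test.sh --tool=vsim --core-numb=1 --thread-numb=8 && tail cosim.log</code>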
  
==== setup_project.sh script ====
The setup_project.sh script can be run as follows from the root of the project:

<code>$ tools/vivado/setup_project.sh [option]</code>

Options are:
* -h, --help                show this help
* -k, --kernel=KERNEL_NAME  specify the kernel to use
* -s, --single-core         select the single-core configuration; by default the manycore configuration is selected
* -c, --core-mask=VALUE     specify the core activation mask, default: 1
* -t, --thread-mask=VALUE   specify the thread activation mask, default: FF
* -m, --mode=gui or batch   specify the tool mode, it can run in either gui or batch mode, default: gui

This script starts the kernel specified on the command line. The kernel must already be compiled before running it on the NaplesPU architecture, for example:

<code>$ tools/vivado/setup_project.sh -k mmsc -c 3 -t $(( 16#F )) -m gui</code>

Parameter -c 3 passes the bit mask for core activation: 3 is (11)<sub>2</sub>, hence tiles 0 and 1 will start their cores. Parameter -t $(( 16#F )) is the thread activation bit mask for each core, stating which threads are active: F is (00001111)<sub>2</sub>, so threads 0 to 3 are running. Parameter -m gui selects the mode in which the simulator executes.
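As a further example, to activate the first four cores and the first two threads of each core, the masks can be written with ordinary shell arithmetic; mmsc is the kernel name used above, batch mode is assumed here, and the selected manycore configuration is assumed to have at least four tiles:

<code>$ tools/vivado/setup_project.sh -k mmsc -c $(( 16#F )) -t 3 -m batch</code>

Here 16#F is 15, i.e. (1111)<sub>2</sub>, which activates cores 0 to 3, while 3 is (11)<sub>2</sub>, which activates threads 0 and 1.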
  
==== simulate.sh script ====
The simulate.sh script can be run as follows from the root of the project:

<code>$ tools/modelsim/simulate.sh [option]</code>

Options:
* -h, --help                show this help
* -k, --kernel=KERNEL_NAME  specify the kernel to use
* -s, --single-core         select the single-core configuration; by default the manycore configuration is selected
* -c, --core-mask=VALUE     specify the core activation mask, default: 1
* -t, --thread-mask=VALUE   specify the thread activation mask, default: FF
* -m, --mode=gui or batch   specify the tool mode, it can run in either gui or batch mode, default: gui

This script starts the kernel specified on the command line. The kernel must already be compiled before running it on the NaplesPU architecture.
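A ModelSim run equivalent to the Vivado example above might look as follows, with the same assumed kernel name and masks:

<code>$ tools/modelsim/simulate.sh -k mmsc -c 3 -t $(( 16#F )) -m gui</code>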
  
== Full Documentation ==

[[The nu+ Hardware architecture|The NaplesPU Hardware Architecture]]

[[toolchain|The NaplesPU Toolchain]]

[[ISA|The NaplesPU Instruction Set Architecture]]

[[Extending nu+|Extending NaplesPU]]

[[Heterogeneous Tile|Heterogeneous Tile]]

[[Programming Model|Programming Model]]
  
 
== Further information on MediaWiki ==
 
The '''NaplesPU''' project documentation will be based on MediaWiki.

''For information and guides on using MediaWiki, please see the links below:''
  
