
Add devcontainer #576

Open · wants to merge 2 commits into base: dev
Conversation

@trdthg (Contributor) commented Dec 10, 2024

Description

resolve #560

  • Dockerfile: install necessary packages, zsh, oh-my-zsh
  • devcontainer.json: add some recommended VS Code extensions
  • Makefile: set up ctg, isac, riscof, sail, and the rv32/64 toolchains
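For reference, a minimal devcontainer.json along these lines might look like the following sketch (the extension IDs and the postCreateCommand are illustrative assumptions, not the actual contents of this PR):

```json
{
  "name": "riscv-arch-test",
  "build": { "dockerfile": "Dockerfile" },
  "customizations": {
    "vscode": {
      "extensions": ["ms-vscode.makefile-tools", "ms-python.python"]
    }
  },
  "postCreateCommand": "make setup"
}
```

VS Code (or Codespaces) reads this file from the .devcontainer/ directory, builds the Dockerfile, and installs the listed extensions inside the container.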

Usage:

Note:

I did not put the setup content into the Dockerfile because we cannot reliably obtain all dependencies through apt install at the moment, and users may want to do some customization (like removing, updating, or adding things) before installation.

  • A lot of things need to be downloaded, mostly toolchains. If your network is poor, the experience may be very bad, so I broke each step down into a separate make target
  • Currently we cannot obtain a precompiled binary of the sail model, so we need to clone and compile it manually. I placed it under /workspaces, at the same level as the act repository
  • riscof is installed directly via pip for now, but I want to clone it locally, as we sometimes need to modify it. (Why not merge riscof into the act repository as well?)
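The "one make target per step" idea might be sketched like this (target names, paths, and recipes are invented for illustration; the PR's actual Makefile differs, and recall that make recipes must be indented with a tab):

```make
# Illustrative sketch only: each download/build step is an independent
# target, so a step that fails on a bad network can be retried on its own.
WORKSPACE ?= /workspaces

.PHONY: setup setup-toolchain setup-sail setup-riscof

setup: setup-toolchain setup-sail setup-riscof

setup-toolchain:
	@echo "fetch rv32/rv64 toolchains into $(WORKSPACE)"

setup-sail:
	@echo "clone and build the sail model under $(WORKSPACE)"

setup-riscof:
	pip install riscof
```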

I have tested the entire process locally, and the experience is very good.

@pbreuer left a comment

OK, first of all you need to list the four files in this (patch?) and say what they are and what they are for.
The shell file is a one-liner, does something with git, and therefore I have no idea what it is for or why anyone would want it. Looks like it sets some parameters. As a shell script it is missing the bang line at top (#!/bin/sh). It is missing an author claim so people know who to blame and ask (blah, blah, licence, blah), and the biggie is the lack of an explanation of what it is/is for. To me it looks dangerous! It mentions "/workspaces", which does not exist on my machine, and is a root directory! Oww!
There is a short json file, again I have no idea what it is for, and it needs to say what it is, who made it, and why. It mentions the shell file so it is presumably some link between something and something ... likely a description of the shell file for some gui, as it mentions an X display number (that's not a number of a display I am working on!)
There is a "docker file" which seems to contain four commands to run in sequence in order to install support infrastructure for building something (what?). Seem to be commands with apt for ubuntu to pull some ordinary build stuff, then a one liner to pull with wget a shell script that maybe is some decorative setup for a zsh running "in" docker, if that has a gui, as I suppose it does.
WHAT is all this for? It seems aimed at a gui, which is something I would not use. The idea in open source is that you engage with source code, not distance yourself from it via a gui!
FINALLY .. there is a Makefile! Hurray. This is the ONLY useful item. You need to describe what it is for, who is the author, etc. It should set parameters at the beginning and follow with rules later. Everything is all mixed up here (aka "unreadable"!).
The first thing it does is set PATH, which is completely unacceptable, because that must be configurable to a user's taste. I imagine docker has some fixed paths and these are they, largely.
At this point you should ask yourself what good this is doing you. People can run makefiles on their own! They just type "make"! WHY would anyone need to run it inside docker? Please explain! In the first place one must not install stuff on one's machine that is not under the control of the system's installation and configuration manager, and this is doing just that, apparently. It is entirely unacceptable! Unless ... docker can figure out what the distro is, and build packages for it on the fly, and install them. Can it?
That would be extremely unreliable - its decisions would inevitably conflict with the distro's packagers own, so not likely.
Sigh .. you need to determine what this is FOR. There is no difficulty in downloading stuff from the riscv area and putting it all in some directory in /usr/local/src, for example. The problem then is that all that stuff has been developed by people who seem to have little idea how to code, or how to develop code that is installable and maintainable (two sides of the same coin!). You need to HELP solve that, so you should be doing something that adds the intelligence and knowhow that they have lacked. You need to have a makefile that builds what they want to build, yes, but WITHOUT messing up your machine with all these extraneous local extra packages etc. You will say that docker manages its own area, but I at least want no area that is not under the control of the package manager on the machine. The question is how to integrate with that.
The simplest method is to mount a transparent file system over the real file system in a sandbox, build whatever you need with docker in there, install it into /usr/local in the transparent system, then take a tar of the binary installation, move the tar out of the transparent file system and step out of the sandbox, destroy the sandbox and all the docker stuff, and then convert the tar to a distro package using alien or whatever you prefer, then install with the distro package manager. If you like you can do without the transparent file system mount; there are plenty of applications that replace "install" with a script that logs where things go, and you can make a list of things to tar in /usr/local from that.
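The staged-install part of that flow can be sketched as follows (the staging directory and file names are made up for illustration, and the alien conversion is shown only as a comment rather than run):

```shell
# Sketch: build into a throwaway staging root instead of the live
# /usr/local, then package the staged tree so it can later be converted
# (e.g. with alien) and installed via the distro package manager.
set -e
STAGE=$(mktemp -d)
mkdir -p "$STAGE/usr/local/bin"
# Stand-in for "make install DESTDIR=$STAGE" of a real build:
printf '#!/bin/sh\necho hello\n' > "$STAGE/usr/local/bin/demo-tool"
chmod +x "$STAGE/usr/local/bin/demo-tool"
# Package only the staged tree; nothing touches the real /usr/local.
tar -C "$STAGE" -czf "$STAGE.tar.gz" usr
# From here one would run e.g. alien --to-deb "$STAGE.tar.gz" and install
# the resulting package with dpkg/apt (not done in this sketch).
tar -tzf "$STAGE.tar.gz"
```

The same record of "what went where" can instead come from an install wrapper that logs file paths, as the comment suggests.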
But the problem with that and the similar method via docker that it looks like you are proposing is that you are not adding any intelligence or knowledge, so it's got no content. I don't think you will know what the various things you are installing do or where they put things or why or how they make them. That is the knowledge that one wants added via a makefile or other tool. It should organise all that with purpose and design, and that can't be done without knowing.
But to look at the makefile you have supplied, exeunt the disorganisation, it says it runs some setups for a variety of things. You need to explain what is meant by "setup", and what it does ... I have no idea! Please do explain.
You also need to explain what the things are it does setup for, and allow someone to modify all this intelligently with that information to hand via appropriate comment.
You then generate some sense of which of those things are installed via running "command", which is an Ubuntu-only thing and WILL NOT WORK anywhere else, so it is useless. Didn't you just download all these things via docker anyway? So why are you testing? Or is this stuff that docker didn't download and is about to become a victim of the stuff that was downloaded? I suppose so! You do know that "./configure" is generally used to discover what is available, and that will generate a Makefile set up accordingly? You likely don't want the Makefile itself to do discovery.
Actually, if the things whose presence is tested for aren't available, the Makefile seems to try to run "install" on some things using pip. Well! That's what the instructions say to do on the riscv site! Is the Makefile only intended to save people the trouble of reading?
So far the only thing that has happened is that docker has built pip (I suppose) and pip is now building whatever the various things you want installed are, somehow, and I at least am no wiser as to what is going where or what it is for.
Please put lots of writing in to explain what the user's choices are and what the consequences of those choices are.
Actually, as far as I can see, no actual building is done? Just whatever they are gets fetched via curl or pip (I don't know what pip does, but I imagine it gets stuff from remote python repositories). Why is that a help to anyone? I can do that!

What one needs is help in building whatever those things are, and help in choosing whether one wants them or not, and help in configuring them to go in the right places and integrating them into the installed system. And that should be done without adding to the system, or modifying it in any way. Provide explanations that allow for informed choice and leave that to the user.

It doesn't help me. Choose one thing to help install, explain what it does, figure out how it can be built in a standard fashion without whatever weirdism the author has misconceived, and do it. For bonus points record where it put things, build a binary tar out of it and convert to the distro package with alien and install that post-hoc.

How about telling ME what those ref.elf.dump files are supposed to be?

@trdthg (Contributor, Author) commented Jan 6, 2025

Thank you very much for your reply. I have never had such a detailed discussion before; I need some time to carefully consider these issues.

@trdthg (Contributor, Author) commented Jan 6, 2025

> How about telling ME what those ref.elf.dump files are supposed to be?

Actually, I don't quite understand what you mean by ref.elf.dump. How did you get it, or where is the doc that mentions it?

If you run riscof coverage, you will get:

tree riscof-plugins/rv64_cmo/riscof_work/rv64i_m/CMO/src/cbo.zero-01.S
riscof-plugins/rv64_cmo/riscof_work/rv64i_m/CMO/src/cbo.zero-01.S
├── cbo.zero-01.log
├── coverage.rpt
├── ref.cgf
├── ref.disass
├── ref.elf
├── Reference-sail_c_simulator.signature
└── ref.md

If you run riscof run, then:

tree riscof-plugins/rv64_cmo/riscof_work/rv64i_m/I/src/add-01.S
riscof-plugins/rv64_cmo/riscof_work/rv64i_m/I/src/add-01.S
├── dut
│   ├── DUT-spike.log
│   ├── DUT-spike.signature
│   └── my.elf
└── ref
    ├── add-01.log
    ├── ref.disass
    ├── ref.elf
    └── Reference-sail_c_simulator.signature

@pbreuer commented Jan 6, 2025 via email

@allenjbaum (Collaborator) commented Jan 6, 2025 via email

@pbreuer commented Jan 6, 2025 via email

@allenjbaum (Collaborator) commented Jan 6, 2025 via email

@allenjbaum (Collaborator) commented Jan 6, 2025 via email

@pbreuer commented Jan 7, 2025 via email

@pbreuer commented Jan 7, 2025 via email

@jordancarlin (Contributor) commented:
Hi @pbreuer. I'm one of the main contributors to wally and have done a lot of the work on our tool flow, so hopefully I can shed some light here. I'm not sure who you were talking to from the wally team before, but if you have further questions specific to wally the best way to get in touch with us is by opening an issue or discussion in the cvw repository. We monitor everything opened over there pretty closely.

Starting off with a high-level overview, the main purpose of riscof is to run the tests already in the riscv-arch-test repository (this repo). Each test in the repo has a string that describes which RISC-V extensions are necessary for it to run. Riscof takes a configuration file as an input that defines which "plugins" to run the tests on. One of these is designated the "reference model" (usually Sail or Spike) and the other the "DUT" (device under test). Additional configuration files pointed to by this main config file define relevant architectural aspects of the DUT, most importantly which extensions are supported.

Riscof then uses this information to determine what subset of the tests should be run on the model (based on the provided list of supported extensions), compiles each of the tests, runs them each on both selected plugins (the reference model and the DUT), and compares the final signature that they dump out. Each plugin has a python file that tells riscof what command to use to compile tests for it and what command to use to run a test on it.

The tests are each designed so that all of the architecturally important information that each one is testing for ends up getting stored to a particular region of memory (dubbed the "signature region"). At the end of the test, this signature region of memory is dumped to a file (the specific means of doing this dump are plugin-dependent and are part of the previously mentioned plugin python files) so that riscof can compare the two.
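Conceptually, the final comparison riscof performs is equivalent to a line-wise diff of the two signature dumps (the file names and signature contents below are made up for illustration; riscof does the comparison internally in Python):

```shell
# Two hypothetical signature dumps: one from the reference model,
# one from the DUT. Each line is one word of the signature memory region.
printf 'deadbeef\ncafef00d\n' > ref.signature
printf 'deadbeef\ncafef00d\n' > dut.signature

# The test passes iff the dumped signature regions are identical.
if diff -q ref.signature dut.signature >/dev/null; then
  echo "signature match: PASS"
else
  echo "signature mismatch: FAIL"
fi
```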

Riscof is capable of several other things (like measuring coverage), but as previously mentioned, this is more relevant for test development and is not necessary if you are just trying to run the riscv-arch-tests.

In the case of wally specifically, we do things slightly differently. We've found that running the tests through Sail and having riscof do the signature comparison is slower than we would like, especially considering how often we end up running the tests. To get around this we have riscof run with Sail as the reference model and Spike as the DUT. At the end of this it dumps out the signature region from Sail, which should be the officially correct and expected results from the test. We then use our own Makefiles to convert the compiled elf files into hex memfiles that get read into our Verilator simulations by our testbench. At the end of the test, the testbench reads in the expected signature region generated earlier and compares it to the actual state of wally's memory directly in the Verilog of the testbench. This avoids recompiling and regenerating all of the tests and signatures every time we want to run the tests on wally.

The different testing methodology that @allenjbaum was referring to with internal state and the RVVI interface is something that we use for a different set of tests and is not relevant for the riscv-arch-tests.

All of the relevant files for this for wally are in the tests/riscof directory (other than the testbench). The Makefile has the actual riscof commands we use along with all of the flags. The main config file is called config.ini and is in that directory too. There are subdirectories that contain all of the spike plugin and sail plugin specific files.

Hopefully this helps a little and feel free to reach out more here for general riscv-arch-test questions or over on the cvw repo for wally specific questions.

@allenjbaum (Collaborator) commented Jan 7, 2025 via email

@pbreuer commented Jan 7, 2025 via email

@allenjbaum (Collaborator) commented Jan 7, 2025 via email

@jordancarlin (Contributor) commented:
> The trouble I had is that the wally people I talk to seem to know "nothing" about software, not their software or anyone's, so can't tell me anything about what is supposed to be needed, or how it works, or anything. Somebody probably just got it to work, somehow, sometime, and nobody knows now what it was is - it's just a button to press.

Please take a look at my response above for some details on Wally’s use of all of this. If you have further Wally specific questions please ask them on the CVW GitHub repository. Either open an issue or discussion. I don’t see any posts over there from you.

Successfully merging this pull request may close this issue: Create Github Codespace config for beginners.

5 participants