
[cwe_checker][ERROR]: Timeout or error during cwe_checker execution #3

Open
frakman1 opened this issue Oct 21, 2021 · 15 comments

@frakman1

frakman1 commented Oct 21, 2021

While analyzing a firmware image, I noticed a never-ending stream of these error messages regarding the cwe_checker.

What is causing this and what can I do to get cwe_checker support to work?

UID: 7b30a06b3f18a4709da562275d698377e8b4e4d23a3e6d959baf2c1727eaf255_5792
 Processed analysis: ['unpacker', 'file_type', 'malware_scanner', 'crypto_material', 'binwalk', 'printable_strings', 'users_and_passwords', 'crypto_hints', 'ip_and_uri_finder', 'software_components', 'file_hashes', 'string_evaluator', 'cpu_architecture', 'qemu_exec', 'source_code_analysis', 'init_systems', 'cve_lookup', 'input_vectors', 'elf_analysis', 'interesting_uris', 'kernel_config', 'file_system_metadata', 'exploit_mitigations', 'cwe_checker', 'known_vulnerabilities', 'tlsh']
 Files included: set()
[pid: 216|app: 0|req: 326/1629] 192.168.86.105 () {34 vars in 668 bytes} [Thu Oct 21 01:29:42 2021] GET /ajax/system_health => generated 2666 bytes in 38 msecs (HTTP/1.1 200) 4 headers in 217 bytes (1 switches on core 0)
[2021-10-21 01:29:43][docker][WARNING]: [Docker]: encountered process error while processing
[2021-10-21 01:29:43][cwe_checker][ERROR]: Timeout or error during cwe_checker execution.
UID: 7ed70be5b656b297c4b91f7de57c70680d7ee684138c04ce144cfec4cf5340c6_6332
[2021-10-21 01:29:43][Analysis][INFO]: Analysis Completed:
UID: 7ed70be5b656b297c4b91f7de57c70680d7ee684138c04ce144cfec4cf5340c6_6332

Under the cwe_checker tab, it says "blacklisted file type".

[screenshot]

@Enkelmann

The most common causes for these error messages in the log are corrupted ELF binaries and timeouts.

Corrupted ELF binaries are quite commonly generated when the unpacking algorithm (incorrectly) unpacks parts of a normal ELF binary as a separate binary. As a consequence, the partial (and corrupted) ELF file cannot be parsed correctly, which causes the cwe_checker to throw an error, which FACT then logs. These errors can be safely ignored unless they happen for a file that you are relatively sure is not corrupted.
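
If you want to check whether such a file really is a corrupted ELF, a quick sanity check (generic tooling, not something from this thread) is:

# A partial/corrupted ELF usually makes these tools complain or
# report "data" instead of "ELF ... executable":
file ./suspect_binary
readelf -h ./suspect_binary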

Timeouts happen when the analysis of a binary with the cwe_checker runs for more than 10 minutes, in which case the analysis is aborted. The timeout can be adjusted by changing the TIMEOUT_IN_SECONDS variable in the cwe_checker plugin. Large binaries can sometimes result in very long execution times for the plugin, so be careful not to set the timeout too high.
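
As a hedged pointer (the exact path and default value are assumptions on my part and may differ between FACT versions), you can locate the constant in your FACT_core checkout and raise it there:

# find where the plugin defines its timeout (path is an assumption):
grep -rn "TIMEOUT_IN_SECONDS" FACT_core/src/plugins/analysis/cwe_checker/
# then edit the value, e.g. from 600 (10 minutes) to 1800 (30 minutes)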

@Enkelmann

For the file in your screenshot, the file type is identified as data. The cwe_checker plugin only runs on files that are identified as ELF executables. That is why it says "blacklisted file type" there.

@jstucke
Collaborator

jstucke commented Oct 21, 2021

If I might add to that: each file is recursively unpacked and then analyzed by each selected plugin individually. What your screenshot shows is the outermost container/image file of the firmware. As @Enkelmann said, the cwe_checker only runs on ELF files, which may be unpacked from the firmware in the process. You can look through the "file tree" for those files, but if anything were found, it would also show up on the analysis page of the outer firmware file, because a summary is generated that links back to the individual analysis results of each unpacked file.

@frakman1
Author

frakman1 commented Oct 21, 2021

Thanks.
Regarding TIMEOUT_IN_SECONDS: it's definitely not that. It fails immediately as I watch the logs; there's no 10-minute wait.

The screenshot may have shown a data file, but not a single file produced any cwe_checker output, not even ELF binaries, and not even recognized files like busybox.

[screenshot]

Where do I see the actual error it is complaining about? "Timeout or error" is not very helpful when troubleshooting.
I checked the Admin->Logs page and didn't see anything there either.
I checked /var/log/fact/main.log and saw the error associated with busybox (matching UID) when I did a "Run additional analysis" on that file:

[2021-10-21 07:18:47][docker][WARNING]: [Docker]: encountered process error while processing
[2021-10-21 07:18:47][cwe_checker][ERROR]: Timeout or error during cwe_checker execution.
UID: aac1880b2885087b0bb4ac8aef0f3a042d35fdc46c390ebbf103d03b3b24a7f4_349772

@Enkelmann

An ELF file, ARM... This looks like an error in the cwe_checker! Could you try running the cwe_checker directly on some of the binaries? If the cwe_checker throws an error, please open a corresponding issue in the cwe_checker repository, preferably with an example binary that you can share with us.

@frakman1
Author

frakman1 commented Oct 21, 2021

I tried the busybox binary, but it seemed to take forever; then I tried another file, but it also seems stuck with no output.

$ docker run -it --rm -v $(pwd):/input --entrypoint /bin/bash fkiecad/cwe_checker
<install file>
cwe@8896d2383126:/input$ file e2fsck
e2fsck: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-armhf.so.3, for GNU/Linux 2.6.16, stripped
cwe@8896d2383126:/input$ /home/cwe/cwe_checker --version
cwe_checker 0.6.0-dev
cwe@8896d2383126:/input$ /home/cwe/cwe_checker e2fsck

e2fsck.zip

@Enkelmann

I tried the e2fsck sample with both the stable version of the cwe_checker and with the dev version.

The stable version, which is used by FACT, ran without problems for me. So I am at a loss as to what is causing the error on your side. Maybe it has to do with the dockerization of FACT itself? Although I am just spitballing here...

The 0.6.0-dev version, however, suffers from the runtime explosion we discussed in the cwe_checker issue. It needed almost 17 minutes on my system, but the analysis still finished successfully.

@jstucke
Collaborator

jstucke commented Oct 21, 2021

I did some debugging, and it turns out that the database paths inside the container, in combination with "docker in docker", are to blame for the error: if you don't mount your database from the host into the container, the paths are wrong when a file is mounted into the cwe_checker container for analysis (the docker daemon outside tries to mount the file but cannot find it, since it only exists inside the FACT container). A fix is therefore to put your database folders in /media and mount that directory inside the container under /media, so that the paths line up.

I think I did exactly that when trying out the container, to keep my existing database, which is why I didn't see the error. We will try to find an alternative solution, or at least document the process in the readme. It is better not to have the database inside the container anyway.
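
To make the mismatch concrete, here is a hedged sketch (illustrative paths and flags, not the literal FACT code) of why the nested mount fails:

# FACT, running inside its container, asks the HOST docker daemon to run roughly:
docker run --rm -v /media/data/fact_fw_data/<uid>:/input fkiecad/cwe_checker /input
# The host daemon resolves /media/data/... on the HOST filesystem. If that path
# only exists inside the FACT container, the bind mount comes up empty and the
# analysis fails. Mounting the host's /media/data at the same path inside the
# FACT container makes both views agree.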

@frakman1
Author

frakman1 commented Oct 21, 2021

I left it running overnight, and the e2fsck run eventually completed (it spit out a lot of errors too, but maybe that's normal).

However, this is when running it manually in its own Docker container. When using FACT, it returns immediately with the error log I mentioned above, so I get no cwe_checker output for any file.

Can you help me understand what/where the database file actually is and how I should mount it on the host (example docker run syntax)? Is there a single folder with everything, or individual files I need to map? I am not familiar with the layout of this Docker container, and any time I do a find/grep for things, I get a long list of "Permission denied" error messages despite running with sudo.

Usually these details are hidden from the user when doing a docker run because the mount points contain the appropriate folder(s) with all the configuration and database files.

I have dozens of Docker-based applications in Unraid that use this scheme. Using GitLab as an example again, these folders are mapped as part of docker run, and I never have to know or worry about the database contents or location, since each app has its own way of handling its databases.

[screenshot: container <-> host folder mappings]

UPDATE:
OK, I found it. I looked at /opt/FACT_core/src/config/mongod.conf and saw (as you correctly said in your post) that the /media folder is where the database data is held.
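
For context, the storage path in a mongod.conf lives in the YAML storage section; FACT's actual file is not quoted in this thread, but the relevant part looks roughly like this (the value is an assumption matching the folders created below):

storage:
  dbPath: /media/data/fact_wt_mongodb   # assumed; must be writable by the fact user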

@jstucke
Collaborator

jstucke commented Oct 21, 2021

So did you get it to run? I also tried it again and got it to work with an external DB the following way (see the consolidated commands after this list):

  • create the folder /media/data
  • create the folders fact_wt_mongodb, fact_fw_data, and fact_auth_data inside /media/data
  • touch /media/data/fact_wt_mongodb/REINITIALIZE_DB
  • chmod -R 755 /media/data (the user inside the container needs read/write access to the folders)
    • alternatively, you could try changing owner and group to 999 (that should be the user and group ID of the fact user inside the container)
  • run the container with -v /media/data:/media/data
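
Spelled out as shell commands, those steps amount to (a sketch; image name and remaining flags omitted):

mkdir -p /media/data/fact_wt_mongodb /media/data/fact_fw_data /media/data/fact_auth_data
touch /media/data/fact_wt_mongodb/REINITIALIZE_DB
chmod -R 755 /media/data    # or: chown -R 999:999 /media/data
docker run -v /media/data:/media/data ... <fact_image>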

@frakman1
Author

frakman1 commented Oct 21, 2021

I have had no luck.

What user are you creating the /media folders as? root, or your username? Note that you have to be root to create anything in /, so I'm wondering if you create them as root and then use chown to change the owner to your user.

I'll give this a try and get back to you.

@frakman1
Author

I got permission errors when creating the folders normally as my user and using chmod 755:
Error: no write permissions for MongoDB storage path: /media/data/fact_wt_mongodb

However, when I used chown -R 999 /media/data, it worked.
Final working docker run command:

docker run -it --name fact --group-add $(getent group docker | cut -d: -f3) -v /media/data:/media/data -v /var/run/docker.sock:/var/run/docker.sock -v /tmp/fact-docker-tmp:/tmp/fact-docker-tmp -p 0.0.0.0:5000:5000 frakman1/fact:latest start
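
For readability, the same command broken across lines, with annotations (comments added here, not part of the original post):

docker run -it \
  --name fact \
  --group-add $(getent group docker | cut -d: -f3) \
  -v /media/data:/media/data \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/fact-docker-tmp:/tmp/fact-docker-tmp \
  -p 0.0.0.0:5000:5000 \
  frakman1/fact:latest start
# --group-add $(getent group docker ...) : join the host's docker group for daemon access
# -v /media/data:/media/data             : database storage, same path inside and outside
# -v /var/run/docker.sock:...            : docker-in-docker via the host daemon
# -v /tmp/fact-docker-tmp:...            : shared temp dir so nested mounts resolve on the host
# -p 0.0.0.0:5000:5000                   : exposes the FACT web UI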

@frakman1
Author

frakman1 commented Oct 21, 2021

I see many other errors in the logs. Is this normal?
The binwalk entropy image still doesn't render.

[2021-10-21 15:32:03][fail_safe_file_operations][ERROR]: Could not read file: FileNotFoundError [Errno 2] No such file or directory: '/tmp/fact-docker-tmp/fact_analysis_binwalk_z6fyi8gn/1828f1cdceb0576a99be4818a302bb642f213ef68325bd2b136cb5f53bffd76f_55.png'
[2021-10-21 15:34:30][docker][WARNING]: [source_code_analysis]: encountered process error while processing
ERROR: Could not connect to clamd on LocalSocket /var/run/clamav/clamd.ctl: No such file or directory

@frakman1
Author

I also see these cwe_checker-related errors:

[2021-10-21 16:21:15][cwe_checker][ERROR]: cwe_checker execution failed: thread 'main' panicked at 'Error while generating runtime memory image: No loadable segments found', src/caller/src/main.rs:154:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

[2021-10-21 16:45:43][cwe_checker][ERROR]: cwe_checker execution failed: Execution of Ghidra plugin failed: Process was terminated.
ERROR REPORT: Import failed for file: /input (HeadlessAnalyzer)

@Enkelmann

The cwe_checker-related errors are most likely caused by Linux kernel objects (.ko files), which the cwe_checker simply cannot handle yet.
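
This matches the "No loadable segments found" panic above: kernel modules are relocatable ELF objects without program headers. A quick generic check (standard readelf usage, not from this thread):

readelf -h module.ko | grep Type    # "REL (Relocatable file)" for kernel modules
readelf -l module.ko                # reports that there are no program headers
readelf -l /bin/ls | grep LOAD      # a normal executable shows LOAD segments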

And yes, it is normal that the cwe_checker generates a lot of CWE warnings. Improving its analysis quality, and thus reducing the number of false positives in its warnings, is part of the ongoing research on the cwe_checker.
