muxi_fix
shh2000 committed Jan 22, 2024
1 parent 13699a0 commit 4be09ee
Showing 3 changed files with 6 additions and 4 deletions.
4 changes: 2 additions & 2 deletions README.md
@@ -442,7 +442,7 @@
</tr>
<tr height="16.80" style='height:16.80pt;'>
<td class="xl65" x:str>8</td>
-<td class="xl65" height="33.60" style='height:33.60pt;border-right:none;border-bottom:none;' x:str><a href="https://github.com/FlagOpen/FlagPerf/tree/main/inference/benchmarks/aquila_7b_mmlu" style="text-decoration:none" target="_parent">Aquila-7B-mmlu</td>
+<td class="xl65" height="33.60" style='height:33.60pt;border-right:none;border-bottom:none;' x:str><a href="https://github.com/FlagOpen/FlagPerf/tree/main/inference/benchmarks/Aquila_7b_mmlu" style="text-decoration:none" target="_parent">Aquila-7B-mmlu</td>
<td class="xl69" x:str>NLP</td>
<td class="xl69" x:str>fp16</td>
<td class="xl69" x:str>N/A</td>
@@ -451,7 +451,7 @@
</tr>
<tr height="16.80" style='height:16.80pt;'>
<td class="xl65" x:str>9</td>
-<td class="xl65" height="33.60" style='height:33.60pt;border-right:none;border-bottom:none;' x:str><a href="https://github.com/FlagOpen/FlagPerf/tree/main/inference/benchmarks/sam" style="text-decoration:none" target="_parent">SegmentAnything</td>
+<td class="xl65" height="33.60" style='height:33.60pt;border-right:none;border-bottom:none;' x:str><a href="https://github.com/FlagOpen/FlagPerf/tree/main/inference/benchmarks/sam_h" style="text-decoration:none" target="_parent">SegmentAnything</td>
<td class="xl69" x:str>MultiModal</td>
<td class="xl69" x:str>fp16</td>
<td class="xl69" x:str>W32A16</td>
4 changes: 2 additions & 2 deletions inference/benchmarks/bertLarge/README.md
@@ -15,9 +15,9 @@ bert_reference_results_text_md5.txt
* Model implementation
  * pytorch: transformers.BertForMaskedLM
* Weight download
-  * pytorch: BertForMaskedLM.from_pretrained("bert-large/base-uncased")
+  * pytorch: BertForMaskedLM.from_pretrained("bert-large-uncased")
* Weight selection
-  * Use save_pretrained to save the loaded bert-large or bert-base weights under the <data_dir>/<weight_dir> path
+  * Use save_pretrained to save the loaded bert-large weights under the <data_dir>/<weight_dir> path
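The load-then-save step above can be sketched as below. This is a minimal illustration, not FlagPerf's actual code; it assumes the `transformers` and `torch` packages are installed, and it instantiates a tiny randomly initialized BERT from a config so the sketch runs without downloading the full `bert-large-uncased` checkpoint (the real flow would call `BertForMaskedLM.from_pretrained("bert-large-uncased")` instead).

```python
import os
import tempfile

from transformers import BertConfig, BertForMaskedLM

# Real flow: model = BertForMaskedLM.from_pretrained("bert-large-uncased")
# Here a tiny hypothetical config stands in to keep the sketch lightweight.
config = BertConfig(hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=128)
model = BertForMaskedLM(config)

# Save the loaded weights under <data_dir>/<weight_dir>, as the README requires.
# Both directory names here are placeholders for the benchmark's configured paths.
data_dir = tempfile.mkdtemp()
weight_dir = os.path.join(data_dir, "bert_large_weights")
model.save_pretrained(weight_dir)

# The benchmark can later reload the weights from that local path.
reloaded = BertForMaskedLM.from_pretrained(weight_dir)
print(sorted(os.listdir(weight_dir)))
```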

### 3. Software/Hardware Configuration and Run Information Reference

2 changes: 2 additions & 0 deletions training/nvidia/README.md
@@ -6,6 +6,8 @@ NVIDIA is a leader in AI computing, pioneering accelerated computing to
Delivering accelerated computing at the full-stack and data-center level, NVIDIA has built what resembles a computing stack or neural network with four layers: hardware, system software, platform software, and applications. Each layer is open to computer manufacturers, service providers, and developers, allowing them to integrate it into their products in whatever way suits them best.
[Source](https://images.nvidia.cn/nvimages/aem-dam/zh_cn/Solutions/about-us/documents/NVIDIA-Story-zhCN.pdf)

+In addition, unless otherwise specified, user code that uses the fp32 number format will by default be computed in the tf32 format on Ampere chips.

# FlagPerf Adaptation Verification Environment Notes
## Environment Configuration Reference
- Hardware
