
Request: add OEM object under Redfish ComputerSystem to fetch metrics for BMC #45

Open
jenningsloy318 opened this issue Nov 1, 2019 · 7 comments

Comments

@jenningsloy318

Hi,
I'd like to request adding an OEM object under the Redfish ComputerSystem resource. Under Oem, I can see that Redfish exposes all the metric data at /redfish/v1/Systems/1/Oem/Lenovo/Metrics/:

{
  "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/",
  "[email protected]": 16,
  "@odata.context": "/redfish/v1/$metadata#LenovoHistoryMetricValueContainerCollection.LenovoHistoryMetricValueContainerCollection",
  "Members": [
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/SystemInputPower"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/SystemOutputPower"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/CPUSubsystemPower"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/MemorySubsystemPower"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/PowerSupplyInputPower1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/PowerSupplyInputPower2"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/PowerSupplyOutputPower1"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/PowerSupplyOutputPower2"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/SystemPerformance"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/CPUSubsystemPerformance"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/MemorySubsystemPerformance"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/IOSubsystemPerformance"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/InletAirTemp"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/OutletAirTemp"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/ExpansionInletAirTemp"
    },
    {
      "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/ExpansionOutletAirTemp"
    }
  ],
  "@odata.type": "#LenovoHistoryMetricValueContainerCollection.LenovoHistoryMetricValueContainerCollection",
  "[email protected]": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/Members",
  "@odata.etag": "W/\"3782fe24455209818097e08bdd8f76d5\"",
  "Name": "Historical Metrics",
  "Description": "A Collection of HistoryMetricValueContainer resource instances."
}

We can get the metrics root URI via:

curl -u user:pass https://10.36.48.23/redfish/v1/Systems/1/ | jq '.Oem[].Metrics'
{
  "@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/"
}
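
For reference, here is roughly the same lookup done from Go with just the standard library. This is only a sketch: the host, credentials, and the Lenovo key under Oem come from the example above, and skipping TLS verification assumes the BMC uses a self-signed certificate.

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Many BMCs present self-signed certificates, so verification is
	// skipped here; drop this for a properly trusted endpoint.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	req, err := http.NewRequest("GET", "https://10.36.48.23/redfish/v1/Systems/1/", nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("user", "pass")

	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode only the piece we care about; everything under Oem is
	// vendor-specific, so it is kept loosely typed.
	var system struct {
		Oem map[string]struct {
			Metrics struct {
				ODataID string `json:"@odata.id"`
			} `json:"Metrics"`
		} `json:"Oem"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&system); err != nil {
		panic(err)
	}
	fmt.Println(system.Oem["Lenovo"].Metrics.ODataID)
	// Prints: /redfish/v1/Systems/1/Oem/Lenovo/Metrics/
}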
@stmcginnis (Owner)

Sorry it has taken me so long to respond to this; it's been a busy travel month.

We do need to get OEM support added. I haven't had a chance to focus on this long enough to figure out how that can work well in Go. I don't have an answer yet, but I hope to set aside some time soon to sort this out.

@superchalupa

Random suggestion for the API:

Move the .rawData structure member up into the Entity object, have the raw Entity object store the raw JSON and provide most of the load/store APIs, and then have the higher-level objects use the Entity APIs to unmarshal into their subclassed structs.

Then, in the Entity, you can add helper APIs to walk into Redfish properties that are not part of the exported Go structure.
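
A rough sketch of that shape, purely illustrative: Entity and its rawData/UnmarshalInto/GetRawProperty names here are hypothetical, not gofish's actual API.

package redfish

import (
	"encoding/json"
	"fmt"
)

// Entity keeps the raw response body alongside the common fields so
// higher-level objects can re-unmarshal it or walk into it later.
type Entity struct {
	ODataID string `json:"@odata.id"`
	Name    string `json:"Name"`

	rawData []byte // the original JSON payload, kept verbatim
}

// UnmarshalInto decodes the stored payload into a more specific
// subclassed struct supplied by the caller.
func (e *Entity) UnmarshalInto(v interface{}) error {
	return json.Unmarshal(e.rawData, v)
}

// GetRawProperty walks a path of keys (e.g. "Oem", "Lenovo", "Metrics")
// through the raw JSON and returns whatever sits at the end of it,
// covering properties that are not part of the exported Go structure.
func (e *Entity) GetRawProperty(path ...string) (interface{}, error) {
	var current interface{}
	if err := json.Unmarshal(e.rawData, &current); err != nil {
		return nil, err
	}
	for _, key := range path {
		obj, ok := current.(map[string]interface{})
		if !ok {
			return nil, fmt.Errorf("%q is not an object", key)
		}
		if current, ok = obj[key]; !ok {
			return nil, fmt.Errorf("property %q not found", key)
		}
	}
	return current, nil
}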

@stmcginnis (Owner)

Thanks @superchalupa! We've actually sorted out a way to approach this here: #146

Please take a look. Thanks!

@superchalupa

I will take a look. I had a colleague reach out for help with a problem accessing Oem resources. I'm not familiar with this project, so I took a quick look at the current code and didn't see a good solution. I looked at the issue list and this one seemed to best match what we were running into.

@stmcginnis (Owner)

Sounds good, thanks @superchalupa!

@superchalupa

Not sure where the best place to discuss this is, and I'm not actively using this project so I have no actual stake here, except that I maintain Redfish server-side stacks across a couple of products. Personally, when I do Redfish client work, I just operate directly on the JSON and don't bother much with wrapping everything up in structs. Something like decoding to a map[string]interface{} usually works for me. It's not the friendliest thing in the world, but it gets you there, especially if you have a couple of helper functions for starting sessions/logging in.
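
For example, treating the body of a GET as loosely typed JSON and asserting the way down; the inline literal stands in for a real authenticated response, and the Lenovo key is just taken from the example earlier in this issue.

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	// body would normally come from a GET of /redfish/v1/Systems/1/;
	// a short literal stands in for it here.
	body := []byte(`{"Oem": {"Lenovo": {"Metrics": {"@odata.id": "/redfish/v1/Systems/1/Oem/Lenovo/Metrics/"}}}}`)

	var doc map[string]interface{}
	if err := json.Unmarshal(body, &doc); err != nil {
		log.Fatal(err)
	}

	// Type-assert down through the vendor-specific branch.
	oem := doc["Oem"].(map[string]interface{})
	lenovo := oem["Lenovo"].(map[string]interface{})
	metrics := lenovo["Metrics"].(map[string]interface{})
	fmt.Println(metrics["@odata.id"])
}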

My observation is that the API should make it easier to access "outside of spec" data in a Redfish response. It seems somewhat difficult/arduous for somebody to use this codebase to access some random OEM or "non-compliant" property in a response without jumping through a lot of hoops. Some way to traverse non-struct-accessible paths would be nice.

Next, you might consider providing some go generate-compatible code generation based on a user-provided openapi.yaml (or equivalent). I've seen at least two projects that use the OpenAPI tools to do Go codegen, mainly just focusing on the structs. I see you have some Python templating code; dynamic code generation (compatible with go generate) in the user's project based on that could be helpful.
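
One hypothetical shape for that is below. The generator name and flags are placeholders, not a real command line, and the struct is hand-written here as an example of what a generator might emit for the Lenovo metrics collection shown earlier in this issue.

//go:generate <your-openapi-generator> --input lenovo-oem.yaml --package oem --output zz_generated.go
package oem

// LenovoHistoryMetricValueContainerCollection mirrors the collection
// payload quoted at the top of this issue; a generator would produce
// structs like this from the vendor's schema.
type LenovoHistoryMetricValueContainerCollection struct {
	ODataID      string `json:"@odata.id"`
	ODataType    string `json:"@odata.type"`
	Name         string `json:"Name"`
	Description  string `json:"Description"`
	MembersCount int    `json:"Members@odata.count"`
	Members      []struct {
		ODataID string `json:"@odata.id"`
	} `json:"Members"`
}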

I haven't personally thought through the API client side enough to have a solid opinion on the ideal way of doing it, but I do know that it needs to be flexible enough to gracefully deal with things outside the spec, vendor differences, and silly things like annotations, embedded messages, and other things the spec allows.

The other 'danger' I see is maintaining all this data in-process. In my products, the Logs/ URLs are auto-expanded and we support several thousand log entries. Somebody who uses this API to parse logs might be in for a surprise about how much memory that uses. One potential solution is to use pagination ($top/$skip) or filtering ($filter). Another is streaming, capturing only the data the user needs.
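
As a small illustration, a client could page through a large log collection with the standard $top/$skip query parameters from the Redfish specification instead of expanding everything at once; the collection path below is just an example.

package main

import "fmt"

// pageURL builds a paginated request for a Redfish collection using the
// standard $top and $skip query parameters.
func pageURL(collection string, top, skip int) string {
	return fmt.Sprintf("%s?$top=%d&$skip=%d", collection, top, skip)
}

func main() {
	// Fetch 50 log entries at a time rather than the whole collection.
	for page := 0; page < 3; page++ {
		fmt.Println(pageURL("/redfish/v1/Systems/1/LogServices/Log1/Entries", 50, page*50))
	}
}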

@stmcginnis (Owner)

I'll keep this open and see if I or someone else has an opportunity to explore some of these suggestions. Thanks for that write-up - it's very useful.
