Development process for an embedded project with significant hardware changes

I have a good understanding of the Agile development process, but I have no idea how to map it to an embedded project with significant hardware changes.

I will describe below what we are currently doing (ad hoc, with no defined process yet). The changes fall into three categories, and a different process is used for each:

  1. complete hardware change

    example: use a different video codec IP

    a) Study the new IP

    b) RTL/FPGA simulation

    c) Implement the legacy interface, then go back to b)

    d) Wait until hardware (tape out) is ready

    e) Test on the real hardware

  2. hardware improvement

    example: enhance the image display quality by improving the underlying algorithm

    a) RTL/FPGA simulation

    b) Wait until the hardware is ready, then test on the hardware

  3. Minor change

    example: only change the hardware register mapping

    a) Wait until the hardware is ready, then test on the hardware

The worry is that we don't seem to have much control over, or confidence in, software maturity when the hardware changes. That confidence is critical to the project's success, as the bring-up schedule is always very tight and the customer expects a seamless transition when moving to a new version of the hardware.

How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? A HAL works for a mature product, but it might not work as well for a consumer product that changes rapidly. How did you test when the hardware platform was not even ready? Do you have well-documented processes for this kind of change?


Adding a Hardware Abstraction Layer (HAL) is a must if you expect the underlying hardware to change during the lifetime of the product. If done correctly, you can create unit tests for both sides of the HAL.
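As a rough illustration of what such a layer can look like (a minimal sketch with hypothetical names, not any particular vendor's API), a display HAL in C can be as simple as a table of function pointers that both the real driver and a test double implement:

    /* display_hal.h -- hypothetical display HAL interface (illustrative only) */
    #include <stdint.h>
    #include <stddef.h>

    typedef enum {
        DISPLAY_OK = 0,
        DISPLAY_ERR_BUSY,
        DISPLAY_ERR_BAD_PARAM
    } display_status_t;

    typedef struct {
        display_status_t (*init)(void);
        display_status_t (*set_mode)(uint16_t width, uint16_t height, uint8_t bpp);
        display_status_t (*blit)(const uint8_t *pixels, size_t len,
                                 uint16_t x, uint16_t y, uint16_t w, uint16_t h);
    } display_hal_t;

    /* The GUI layer only ever talks to this pointer; swapping in a different
     * implementation (real silicon, FPGA board, or in-memory fake) requires
     * no change to the upper layers. */
    extern const display_hal_t *display_hal;

With the boundary drawn this way, unit tests on either side of it become ordinary host-side C tests.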

For example, the HAL is simply an API from your GUI to the actual display hardware. You can write a fake hardware driver that doesn't drive a physical device, but responds in different ways to verify that your upper API layers handle all possible response codes from the HAL. Maybe it creates a bitmap in memory (instead of driving external I/O) that you can compare to an expected bitmap to see if it's rendering correctly.
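Continuing that hypothetical interface, the fake driver can simply write into a framebuffer in RAM, and a host-side test can then compare the result against a golden bitmap with memcmp. The gui_draw_splash_screen() and golden_splash_bmp names below are made up for the example:

    /* fake_display.c -- test double that renders into RAM instead of hardware */
    #include <string.h>
    #include "display_hal.h"

    #define FAKE_W 320
    #define FAKE_H 240

    static uint8_t fake_fb[FAKE_W * FAKE_H];   /* 8 bpp framebuffer in memory */

    static display_status_t fake_init(void) {
        memset(fake_fb, 0, sizeof fake_fb);
        return DISPLAY_OK;
    }

    static display_status_t fake_set_mode(uint16_t w, uint16_t h, uint8_t bpp) {
        /* Accept only the one mode the fake supports. */
        return (w == FAKE_W && h == FAKE_H && bpp == 8) ? DISPLAY_OK
                                                        : DISPLAY_ERR_BAD_PARAM;
    }

    static display_status_t fake_blit(const uint8_t *pixels, size_t len,
                                      uint16_t x, uint16_t y,
                                      uint16_t w, uint16_t h) {
        if (x + w > FAKE_W || y + h > FAKE_H || len < (size_t)w * h)
            return DISPLAY_ERR_BAD_PARAM;      /* lets tests exercise error paths too */
        for (uint16_t row = 0; row < h; ++row)
            memcpy(&fake_fb[(size_t)(y + row) * FAKE_W + x],
                   &pixels[(size_t)row * w], w);
        return DISPLAY_OK;
    }

    static const display_hal_t fake_hal = { fake_init, fake_set_mode, fake_blit };
    const display_hal_t *display_hal = &fake_hal; /* linked in place of the real driver */

    /* In a test, render through the normal GUI path and compare the in-memory
     * framebuffer to an expected ("golden") image:
     *
     *     gui_draw_splash_screen();
     *     assert(memcmp(fake_fb, golden_splash_bmp, sizeof fake_fb) == 0);
     */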

Likewise, you can write a unit test that provides good coverage of the HAL from the upper layers, so you can verify that your new hardware driver is responding correctly. Using the display example, you cycle through all possible screen modes, interface elements, scrolling methods, etc. Unfortunately, for that test you'll need to physically watch the display, but you can potentially run side-by-side with old hardware to see speed improvements or deviations in behavior.
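For that upper-layer coverage test, a sketch along these lines (again with hypothetical helper names) can be pointed at the real driver on the bench, or run on old and new hardware side by side:

    /* hal_coverage_test.c -- drives the HAL through representative mode and
     * drawing combinations against the real driver (hypothetical helper names) */
    #include <assert.h>
    #include "display_hal.h"

    static const struct { uint16_t w, h; uint8_t bpp; } modes[] = {
        { 320, 240, 8 }, { 640, 480, 16 }, { 800, 480, 24 },
    };

    void run_display_hal_coverage(void) {
        assert(display_hal->init() == DISPLAY_OK);
        for (size_t i = 0; i < sizeof modes / sizeof modes[0]; ++i) {
            assert(display_hal->set_mode(modes[i].w, modes[i].h,
                                         modes[i].bpp) == DISPLAY_OK);
            /* draw_test_pattern(), scroll_test(), font_test(), ... would go here;
             * on the bench these still need eyes on the panel. */
        }
    }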

Back to your example, though. How different is switching to another video codec? You're still just pushing bytes around at your upper layers. If you're implementing a known codec, it would be helpful to have source files that act as a unit test (covering a full range of possible data formats) to ensure that your codec decodes and displays them correctly (without crashing!). Decoding to a bitmap in memory makes for a good unit test -- you can just do a memory compare to a raw decompressed frame.
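For the codec case, the decode-to-memory idea might look roughly like this; codec_decode_first_frame() and the test-vector file names are placeholders for whatever your codec wrapper and vectors actually are:

    /* codec_regression_test.c -- decodes a known bitstream and compares the
     * first frame against a stored raw reference frame */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define FRAME_BYTES (320 * 240 * 3 / 2)   /* one 320x240 YUV420 frame, for example */

    /* Placeholder: wrap the codec under test behind this prototype. */
    int codec_decode_first_frame(const uint8_t *bitstream, size_t len,
                                 uint8_t *out_frame /* FRAME_BYTES */);

    static uint8_t *load_file(const char *path, size_t *len_out) {
        FILE *f = fopen(path, "rb");
        assert(f != NULL);
        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        rewind(f);
        uint8_t *buf = malloc((size_t)len);
        assert(buf != NULL);
        size_t got = fread(buf, 1, (size_t)len, f);
        assert(got == (size_t)len);
        fclose(f);
        *len_out = (size_t)len;
        return buf;
    }

    int main(void) {
        size_t bs_len, ref_len;
        uint8_t *bitstream = load_file("vectors/basic_320x240.bin", &bs_len);
        uint8_t *reference = load_file("vectors/basic_320x240_frame0.yuv", &ref_len);
        static uint8_t frame[FRAME_BYTES];

        assert(ref_len == FRAME_BYTES);
        assert(codec_decode_first_frame(bitstream, bs_len, frame) == 0);
        assert(memcmp(frame, reference, FRAME_BYTES) == 0);   /* bit-exact compare */

        puts("codec regression test passed");
        free(bitstream);
        free(reference);
        return 0;
    }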

I hope that helps. If not, ask more questions.

