While the hype around the "Big Data" buzzword has settled down a bit, the amount of data generated during chip design certainly fits the bill. Nowadays, terabytes of data are generated during each tapeout, especially at lower process nodes. This presents both challenges and opportunities for chip design teams.
As designs get closer to completion, visibility into metadata such as test run times and the ability to drill down into failures become even more important. Teams also need to understand why a test failed, and in some cases the root cause must be understood in enough detail to support further analysis and investigation. This information is typically hidden in a test signature that needs to be unpacked.
Logarithm Labs' product automates the parsing and extraction of QoS metrics from reports and logs. Designers can use pre-built or customized parsers to push the desired metrics into a central staging area.
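To make the pattern concrete, here is a minimal sketch of that parse-and-push flow. It is illustrative only: the regex, the metric names, and the `push_to_staging` helper are assumptions, not the actual Logarithm Labs API, and the staging area is modeled as a plain dictionary.

```python
import re

# Matches lines like "coverage.functional = 87.3" or "sim.runtime_sec: 412.7".
METRIC_PATTERN = re.compile(r"^(?P<name>[\w.]+)\s*[:=]\s*(?P<value>[-\d.]+)$")

def parse_log(lines):
    """Extract name/value metric pairs from raw log lines, skipping chatter."""
    metrics = {}
    for line in lines:
        match = METRIC_PATTERN.match(line.strip())
        if match:
            metrics[match.group("name")] = float(match.group("value"))
    return metrics

def push_to_staging(staging, run_id, metrics):
    """Store one run's metrics under its run ID in the central staging area."""
    staging.setdefault(run_id, {}).update(metrics)

log = [
    "sim.runtime_sec: 412.7",
    "coverage.functional = 87.3",
    "INFO: unrelated tool chatter",
]
staging = {}
push_to_staging(staging, "regression_2024_w12", parse_log(log))
print(staging["regression_2024_w12"]["coverage.functional"])  # 87.3
```

A customized parser in this style is typically just a different `METRIC_PATTERN` per tool; the staging step stays the same.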
Having a centralized data infrastructure for assessing the Quality of IP is critical and certainly worth the investment given the increased complexities of IP design and the deluge of metrics faced by IP design teams today.
With Logarithm Labs' Engineering Automation Platform, verification teams can easily ingest data related to their regression runs and use the data to create reports, dashboards, and visualizations. They can also drill into error signatures when a testbench fails, enabling them to quickly identify the bug and track the design changes that introduced it.
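The key idea behind that drill-down is bucketing failures by a normalized error signature, so one bug that surfaces in many tests is investigated once. The sketch below assumes this bucketing approach; the failure records and the normalization rules are illustrative, not the platform's actual implementation.

```python
import re
from collections import defaultdict

def normalize_signature(message):
    """Collapse run-specific details (hex addresses, sim timestamps) so
    failures caused by the same bug share one signature."""
    message = re.sub(r"0x[0-9a-fA-F]+", "0x<ADDR>", message)
    message = re.sub(r"@\s*\d+\s*ns", "@ <TIME>", message)
    return message

def group_failures(failures):
    """Map each normalized signature to the list of tests that hit it."""
    groups = defaultdict(list)
    for test, message in failures:
        groups[normalize_signature(message)].append(test)
    return dict(groups)

failures = [
    ("tb_fifo_smoke", "UVM_ERROR: read mismatch at 0x3f20 @ 120 ns"),
    ("tb_fifo_burst", "UVM_ERROR: read mismatch at 0x7a04 @ 980 ns"),
    ("tb_dma_basic", "UVM_FATAL: timeout waiting for done"),
]
groups = group_failures(failures)
print(len(groups))  # 2: both FIFO failures share one signature
```

Grouping this way turns hundreds of raw failure logs into a short list of distinct signatures to triage.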
With the increased complexity of components and the integration of new features, designers have widely adopted verification methodologies that let them verify their designs block by block. A block can be verified in isolation first, and its integration with other blocks verified afterward. For example, a regression run after integration can confirm that a newly integrated block still behaves as expected alongside the blocks around it.
Robust report-parsing, metric-aggregation, and reporting infrastructure is required for design teams to get the most out of their linting and formal verification tools. The focus should be on the capabilities design teams need to identify critical bugs and high-risk designs, since those are what become the bottleneck to advancing to the next stage.
As the complexity of chip design increases, designers struggle to coordinate effectively across the design team. The Logarithm Labs Engineering Automation Platform gives them the visibility and coordination needed to design, verify, and deploy more complex chips faster.
To cope with this growing complexity of chip design, design teams need to be able to automate status tracking and analysis. They need a way to automatically parse logs and reports to collect the appropriate metrics, process and analyze them, and generate reports to help managers understand the “ground truth” of their design and the impact of a design change.
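One small but representative piece of that automation is comparing the metrics collected from two runs to summarize the impact of a design change. The sketch below assumes metrics have already been parsed into per-run dictionaries; the function and metric names are hypothetical.

```python
def diff_runs(baseline, current):
    """Report per-metric deltas between two runs' metric dictionaries,
    flagging metrics that appear in only one run."""
    report = []
    for name in sorted(set(baseline) | set(current)):
        before = baseline.get(name)
        after = current.get(name)
        if before is None or after is None:
            report.append(f"{name}: only in one run")
        else:
            report.append(f"{name}: {before} -> {after} ({after - before:+})")
    return report

# Hypothetical metrics from the runs before and after a design change.
baseline = {"tests_passed": 412, "tests_failed": 9, "lint_warnings": 131}
current = {"tests_passed": 405, "tests_failed": 16, "lint_warnings": 118}
for line in diff_runs(baseline, current):
    print(line)
# lint_warnings: 131 -> 118 (-13)
# tests_failed: 9 -> 16 (+7)
# tests_passed: 412 -> 405 (-7)
```

A report like this makes the "ground truth" of a change immediately visible: the lint cleanup helped, but seven previously passing tests now fail.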
Enabling data-driven decision making for chip designs requires the ability to centralize all the required data and metrics from across your design and flow. This requires tools that can do the grunt work of data collection, analysis, and reporting. In this blog, we highlight the key visibility challenges that make it hard for chip design teams to be more productive.
Design teams are under tremendous pressure to deliver the highest possible quality in the shortest possible time. In this blog, we highlight the three primary challenges to achieving end-to-end visibility.
Logarithm Labs' mission is to bring the Jupyter Notebook ecosystem to the engineering community and radically transform the productivity of engineering teams.