(If you are an email subscriber and did not see the embedded audio player in my Day 2 report, you will need to go to the-world-is-analog.blogspot.com to listen to the iPDK debate.)
Wednesday - Day 3
1. Meeting with Mephisto Design Automation
Mephisto is another recent entrant into the latest generation of analog optimization vendors. In the last generation there was Neolinear, Antrim, Analog Design Automation, Barcelona... now we have Magma, Mephisto, MunEDA, Orora, Ansyn... There just wasn't time to see them all.
I was interested to check out Mephisto, since they are attempting to commercialize technology developed under Prof. Georges Gielen at KU Leuven in Belgium. Georges was an adviser to us at Antrim, and I enjoyed visiting with his students on several occasions. Unfortunately, after another early morning drive to San Francisco, this meeting never happened, as I found myself standing in an empty booth. You've got to get more organized, guys!
So, I can only attempt to interpret the product descriptions on Mephisto's website. They claim to be able to do "sizing from scratch", which is a more aggressive claim than most competitors. It is very difficult and inefficient, and often impossible, to start an optimization with a completely un-parameterized circuit. We patented methods to address this at Antrim. It actually appears that their technology for verification is more valuable than the tools for optimization.
Mephisto also claims to have "patent-pending technology" (couldn't find any pending claims at http://www.uspto.gov) that "allows designers to capture and solve complex design problems at multiple abstraction levels simultaneously". This is also something we developed at Antrim, to move beyond brute-force SPICE simulation-based optimization algorithms. To avert any potential conflict, I advise checking out the Antrim patents, which now belong to Cadence.
2. Meeting with Tanner EDA
With analog design tools currently splitting into the IPL/OA-based camp (Synopsys, SpringSoft, Ciranova, Analog Rails) or Cadence, I was interested to get the perspective of twenty-year-old Tanner, which has long been the budget-minded alternative. In fact, Tanner's booth presentation featured an "ROI calculator" to emphasize how much cheaper their tools are.
I met with Daniel Hamon, VP and General Manager, who began with a review of Tanner's history as the EDA division of Tanner Research. Tanner EDA has 25,000 installed seats, a significant number compared to what I recall of Cadence's installed base. Their tools are primarily used on Windows-based PCs.
Tanner has not joined the OpenAccess movement, but Daniel Hamon emphasized the interoperability built into their toolset. Tanner's customer base of analog/RF designers is generally doing much smaller high-performance designs, which require dedicated analog processes rather than nanometer CMOS. Tanner does support TSMC processes at 0.35um-0.18um, and also has design kits for X-FAB, Austria Microsystems, and MOSIS.
3. Meeting with Silicon Frontline
Silicon Frontline was formed by a team from Nassda that had developed the advanced post-layout analysis capabilities which set HSIMplus apart from other Fast-SPICE solutions. I know, because I emphasized and promoted that differentiation while I was the product marketing manager. It is a differentiation which Synopsys still gets to enjoy today.
I met with Dermott Lynch, Silicon Frontline's VP of Sales, to discuss the company's new solutions for 3D extraction (which made Gary Smith's "What to See" at DAC list). The company claims to provide near-3D RC extraction accuracy with something approaching 2D performance. Rather than go into a great amount of detail here, I will direct you to an extensive interview with Yuri Feinberg and Andrei Tcherniaev that is available at SCDsource. The cumulative experience of the ex-Nassda team in how to manage large parasitic databases, and to optimize analysis of effects in signal and power nets, does make them a company to watch.
4. Meeting with Ansyn
Ansyn was the 3rd analog optimization company on my agenda, catching my eye for their claim to perform "Analog Circuit Synthesis at System Level". This was the company's 1st time at DAC, after spinning off technology developed at Linköping University in Sweden in 2006. I met with Emil Hjalmarson, PhD and CEO of the new startup.
The discussion with Ansyn turned out to be, totally by chance, the highlight of DAC for me. As I was discussing Ansyn's approach to simulation-based optimization with Emil, a gentleman I could not see behind me apologized for interrupting as he grabbed some literature from the rack in the corner of the tiny booth. Emil seemed especially concerned about who the stranger was when the man asked if we minded him listening in on our conversation. The stranger then identified himself - "I'm Andrei Vladimirescu", he said. If you know SPICE, you know Prof. Andrei Vladimirescu, since he co-wrote the 1st successful version: SPICE-2. He is also the author of "The SPICE Book".
This made for a very stimulating impromptu conversation as we discussed the pros and cons of integrating an optimizer into the core of a new simulator, as Ansyn has chosen to do. We debated this very same issue when we were developing our product plan at Antrim. Prof. Vladimirescu rightly pointed out that achieving market acceptance and foundry endorsement for a new simulator is a major challenge, and can be an insurmountable obstacle to success. On the other hand, having the optimizer wrapped around an existing commercial simulator means suffering huge performance penalties.
At Antrim, I had argued for the deeply embedded approach, so that we could build a revolutionary system that would be designed not so much for simulation as for optimization, by allowing the "synthesizer" to directly alter the matrix dynamically. Well, R&D would have none of that, and instead we ended up building the worst of both worlds. We had a new proprietary simulator AND, in order to accelerate that development as a stand-alone product, we still wrapped the optimizer around the outside.
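For anyone who hasn't lived through this debate, here is a minimal sketch of what the "wrapped around the outside" architecture amounts to. This is purely my own illustration in Python: the netlist template, the "spice" command line, and the measurement names are hypothetical placeholders, not any vendor's code.

```python
# A minimal sketch of the "wrapper" architecture, NOT any vendor's
# actual code: the optimizer treats the simulator as a black-box cost
# function. The netlist template, the "spice" command line, and the
# .measure parsing are all hypothetical placeholders.
import os
import re
import subprocess
import tempfile

from scipy.optimize import minimize

NETLIST_TEMPLATE = """* two-stage opamp (placeholder topology)
.param W1={w1} L1={l1}
* ... devices, sources, .measure gain_db ... statements ...
.end
"""

def parse_measure(output, name):
    """Pull one .measure result out of the simulator's text output."""
    m = re.search(rf"{name}\s*=\s*([-+\d.eE]+)", output)
    return float(m.group(1)) if m else float("-inf")

def cost(params):
    """One optimizer evaluation: write a netlist, run a full simulation,
    score the result. Every call pays the full parse/setup/DC-solve cost."""
    w1, l1 = params
    with tempfile.NamedTemporaryFile("w", suffix=".sp", delete=False) as f:
        f.write(NETLIST_TEMPLATE.format(w1=w1, l1=l1))
        netlist = f.name
    try:
        out = subprocess.run(["spice", "-b", netlist],  # hypothetical CLI
                             capture_output=True, text=True).stdout
        return max(0.0, 60.0 - parse_measure(out, "gain_db"))  # want >= 60 dB
    finally:
        os.unlink(netlist)

# Derivative-free search, since the simulator gives us no gradients.
result = minimize(cost, x0=[2e-6, 0.5e-6], method="Nelder-Mead")
```

The embedded approach I argued for would collapse the per-evaluation overhead by keeping the matrix resident and perturbing it in place, instead of paying for parsing and setup on every cost-function call.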
For optimization, Ansyn's tool can potentially achieve higher performance than other solutions. They are also taking advantage of multi-core processing, which has become popular for accelerating traditional SPICE simulators. I don't see anything concrete at this point to back up the provocative "synthesis" claim, but it will be interesting to see whether Ansyn comes away from their 1st DAC visit with a desire to jump into the EDA fray, or to stay local in the much smaller European market.
5. Follow-up Discussion with CoWare on modeling and simulation of next generation LTE modems
Following on my introduction to CoWare's LTE modem solution on Tuesday at DAC, I had the opportunity to sit down for a chat with Dr. Johannes Stahl, VP Marketing & Business Development. CoWare has addressed the concept of virtual platforms with a vertical market solution for the 4G mobile wireless market.
As a bit of background: the 4th generation (or 4G) of wireless technology will be characterized by a flat end-to-end internet protocol (IP) network that is optimized for broadband data communications. This is a very significant change, as all preceding generations, including today's 3G, are based on (what is at best) a hybrid architecture that was optimized for voice from the outset. A partisan debate between advocates of two similar technologies for the radio access part of 4G networks, mobile WiMAX and LTE, has distracted from development. As things stand today, mobile WiMAX has been deployed in several U.S. cities by Clearwire/Sprint, while LTE has yet to begin its first field trials. (For much more on this, see my report "The Emerging 4G Wireless Landscape in the U.S.").
With LTE being several years behind WiMAX in regards to SoC development and network deployment, a virtual platform to accelerate software and algorithm development is essential. At a Santa Clara Valley IEEE Communication Society meeting earlier this year, representatives of several WiMAX chip providers emphasized that the algorithms are the critical element in wireless performance, more than the silicon. Chip complexity is increasing, as RF front ends are integrated into single-chip SoCs to lower cost, size, and power.
CoWare was very effective in utilizing Twitter to publicize the LTE presentations in their booth:
CoWare: Addressing the Design Challenges of ARM-based LTE Mobile Phone Designs Using System Virtualization, 10:15am - 12:15pm, Booth 4359
9:40 AM Jul 27th from web

ST-Ericsson talk starts at 11:00, using a Virtual Platform to Develop and Validate a UMTS Layer1 Protocol Software Stack 3665 #46DAC
10:23 AM Jul 27th from web

ST-Ericsson: Integrated ANITE wireless tester with Virtual Platform. Complete test bench created virtually. #3665 #46DAC
11:37 AM Jul 27th from web

Motorola: using SystemC simulation for product planning and architecture exploration of state-of-the art 2G/3G/LTE modem chipsets #46DAC
4:19 PM Jul 27th from web

The ST-Ericsson "tweet" regarding the integration of a virtual platform with a tester was particularly interesting. My summary assessment is that it was refreshing to see a real application of Electronic System Level design, amongst all the blathering regarding ESL as the "next big thing" for EDA.
6. PAVILION PANEL: The AMS Revival: Bipolar Thinking?
Chair: Dave Maliniak - Electronic Design, New York, NY
Panelists:
Christoph Grimm - Technische Univ. Wien, Vienna, Austria
Mike Woodward - The MathWorks, Inc., Natick, MA
David Smith - Synopsys, Inc., Hillsboro, OR
This panel produced my favorite quote at DAC, from David Smith: "AMS is the whole world!". Yes, The World Is Analog!
So, if you've followed my blogs for a while, you will know that it is a continuing source of annoyance to see how analog is treated by the "non-cognoscenti". At this year's DAC we had already heard analog called "stodgy", and now it needs a "revival" and is "bipolar"? Very funny. We all know that only digital types can be truly bi-polar ;-)
The discussion here was intended as a forum on the application of higher-level AMS design languages, such as SystemC-AMS. Hence no designers on the panel? I had to stifle a laugh when one of the panelists said that it is "up to management to convince designers" to use top-down methodologies. Riiight... let me know how that goes for you.
The thing is, analog designers already work top-down, to the extent that top-down applies. It's just that, like every aspect of analog design, it's hard for the non-cognoscenti to understand because it doesn't look like digital - and it never will. With MathWorks, the maker of MATLAB, on the panel, that should have gone without saying. Designers often use MATLAB to do early, high-level architectural analysis. At companies that are large enough to staff CAD and library groups, AMS behavioral modeling is commonly used as well, in Verilog-AMS or VHDL-AMS.
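To make "early, high-level architectural analysis" concrete, here is the kind of ten-minute experiment I mean. A designer would do this in MATLAB; I've sketched it in Python with NumPy, and the topology and numbers are purely illustrative, not anything presented on the panel.

```python
# The kind of quick top-down architectural experiment designers run
# long before transistors exist. Illustrative only: a first-order
# sigma-delta modulator, checked for noise shaping at the behavioral level.
import numpy as np

fs, n = 1_000_000, 65536                   # sample rate, sample count
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)     # half-scale 1 kHz input

integ, v = 0.0, 0.0
y = np.empty(n)
for i in range(n):
    integ += x[i] - v                      # integrate the error
    v = 1.0 if integ >= 0 else -1.0        # 1-bit quantizer + feedback
    y[i] = v

# Quantization noise should be pushed away from DC at ~20 dB/decade.
# If the spectrum doesn't show that, fix the architecture on paper,
# not in layout.
spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1 / fs)
```

That is top-down analog design: prove out the architecture at the behavioral level before committing to transistors. It just doesn't look like digital synthesis.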
As just about everyone knows, the biggest difference between analog and digital is that there is no automated algorithmic top-down analog synthesis - it's mostly manual. That is unlikely to change anytime soon. However, what the non-cognoscenti also forget or ignore, is that there is plenty of "bottom-up" design that has to occur in digital as well, before you can execute your synthesis flow. Those cell libraries don't just materialize out of nowhere.
Mixed-signal IP is also employed extensively in AMS SoCs, same as in digital. It's just that most analog specifications require tuning and optimization, which is - by the way - why the highest performance digital chips are custom as well.
One of the panelists stated the opinion that new tools & languages are not required, that all you need is methodology and management. Other panels, such as the one following this session, disagreed with that opinion. Speakers at the Tuesday morning AMS breakfast had already shown some of the limitations of currently available tools for AMS verification, which was correctly identified as the #1 problem in AMS design.
It would be valuable to extend some digital verification concepts to AMS, especially the concept of assertions. I was an early participant in the Accellera Requirements Definition Working Group, where we explored how Verilog-AMS and SystemVerilog might be integrated to address this need. I am concerned, though, that these committees often end up focusing more on defining a language than on defining a solution to the real problems.
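To illustrate the concept, here is a sketch of what an analog assertion amounts to. This is purely my own Python illustration of the idea, not anything the working group defined, and all names and numbers are hypothetical: the property is evaluated continuously over a waveform, rather than sampled at clock edges.

```python
# A conceptual sketch of an "analog assertion" -- my own illustration,
# not anything defined by the Accellera working group: the property is
# checked continuously over a waveform, not just at clock edges.
import numpy as np

def assert_settles(time, wave, target, tol, deadline):
    """Assert that wave stays within tol of target after deadline."""
    late = time >= deadline
    err = np.abs(wave[late] - target)
    if err.size and err.max() > tol:
        i = int(np.argmax(err > tol))      # first violating sample
        raise AssertionError(
            f"settling violated at t={time[late][i]:.3e}s: "
            f"error {err[i]:.3e} exceeds tol {tol}")

# Example: a bandgap output must settle to 1.2 V +/- 5 mV within 10 us.
t = np.linspace(0.0, 20e-6, 2001)
v = 1.2 * (1.0 - np.exp(-t / 1e-6))        # fake start-up transient
assert_settles(t, v, target=1.2, tol=5e-3, deadline=10e-6)
```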
7. PANEL: Guess, Solder, Measure, Repeat – How Do I Get My Mixed-Signal Chip Right?
Chair: Ken Kundert - Designer’s Guide Consulting, Los Altos, CA
Panelists:
Georges Gielen - Katholieke Univ. Leuven, Leuven, Belgium
Martin O’Leary - Cadence Design Systems, Inc., San Jose, CA
Eric Grimme - Intel Corp., Hillsboro, OR
Sandeep Tare - Texas Instruments, Inc., Dallas, TX
Warren Wong - Synopsys, Inc., Mountain View, CA
I was happy to hear Prof. Gielen say that they still require their students to learn to solder and measure. Nobody should be granted an EE degree without ever actually building something.
No new ground was covered here, but as my final session at DAC it was a good wrap-up on AMS issues.
- In a reversal of digital thinking, Intel described how they must now apply AMS verification to microprocessors. There are many analog phenomena to be accounted for, even in "purely" digital blocks: leakage, power-up/power-down, reliability, noise, variability, etc.
- TI presented some of the same material from the Tuesday AMS breakfast, discussing the complexity of verifying digitally-controlled analog blocks. Interface problems between multiple power domains, power-up/down transient behavior, and incomplete assumptions were some of the problems described.
- Eric from Intel said that they have not yet decided on the use of behavioral models, which require 10X the effort of digital modeling, versus Fast-SPICE. The current preference is for SPICE.
- Prof. Gielen talked about how using models is necessary, but risks leaving out details. He mentioned that academia is working on methods to aid in AMS model creation. Companies such as Lynguent and Orora were showing solutions in this space at DAC, but it is sad to see that we have not come further, more than 10 years after we first started working on a solution at Antrim.
- In the Q&A, an engineer from Sony asked how often AMS IP is re-used. Intel said they re-use IP, but it still needs to be validated within the chip. TI also re-uses IP, but quite often it needs to be re-designed.
- Georges talked about how more designs are taking advantage of digital correction to be more resilient.
- In regards to noise & cross-talk: panelists stated that they just can't be simulated at the top level, and you need to rely on designer judgment.
- Intel would like to see analog assertion capability in SystemVerilog.
-Mike
6 comments:
Mike,
Good summary of analog companies at DAC.
I did visit with Mephisto and wrote this: http://www.chipdesignmag.com/payne/2009/07/28/mephisto-and-analog-design-automation/
Daniel
Thanks Dan,
I inserted the html so readers can just click here for your article on Mephisto:
Dan Payne on Mephisto
-Mike
Hi Mike,
I regret that you did not have a chance to talk to us, but we were fully booked before coming to DAC. On Wednesday morning we organized a breakfast presentation (free to attend) which was announced well before DAC (and also distributed through the DAC mailing list). With the small team present we did everything we could to organize everything smoothly giving priority to (potential) customers and partners over eager distributors, sales persons and consultants. It did happen that some people of our team (including myself) were in meetings at partners' booths and, in parallel, another team member was giving a private demo in our suite (next to our booth). Sorry that you missed us at such a time. BTW: I remember having a phone call with you earlier this year, so feel free to contact me anytime for information additional to our website.
In response to your technical assessment: First of all, customers have done circuit sizing with our tool starting from scratch (with custom parameterization as well as automatic parameterization), but it is not the most common case. You are right about our verification and autodocumentation capabilities: this is our usual entry point at customers, yielding immediate productivity improvement, and it is accepted by even the most conservative analog designers.
We do have a patent pending for our multi-level optimization (provisional Oct 4, 2006, PCT Oct 2, 2007) and we filed the national phase in the US in March 2009. I can give you more details later (as I am now on holidays). Thanks for the patent advice, but we and our patent advisory bureau did thorough research to make sure we are protecting our IP without conflicting with existing patents.
I read your 3-day coverage and must admit it provides valuable info (thanks!) - that is, about the events/companies you did visit/meet.
Finally, I'd like to provide some valuable reasons why you DON'T want to have circuit sizing/optimization integrated with a simulator. In short, you want to keep them separated because you do not want your optimizer tied to one simulator (same as with schematic capture: it should be able to launch simulations with different SPICE simulators). Another reason is that you need to also address higher abstraction levels: MDA is covering this in its M-Design tool, where this has been used with Verilog, Matlab, etc. (even custom tools or scripts). Even at the same abstraction level, you sometimes want to combine different tools (e.g. sim A for transient, sim B for noise).

BTW: We found it is NOT more efficient to combine optimizer and simulator (I once also thought this would be more efficient). The reason is that the overhead due to the non-integration is only significant if the simulation itself is extremely short. Unfortunately this is not the case in practice. The only case where a potential benefit would result is when the simulation algorithm time could be drastically reduced due to direct manipulation of the internal states. Again, in practice, changing 1 parameter can give another operating region, and now I'm only talking about SPICE, not even higher abstraction levels.
Kind regards,
Kenneth
CEO MDA
Hi Kenneth,
Thanks for your comments, and for sharing your expert insights. I do see all the points you made in favor of outside-the-simulator optimization. We had the same debates at Antrim.
I am surprised though, that you found no significant effect on performance. I would guess that you must be saving the circuit state in some form. Just establishing DC operating point for a re-parameterized circuit can consume a lot of cycles. My thought at the time I worked on this was why waste time re-loading the matrices and going through all the other setup, especially when you get close to a solution that meets the objectives? But, of course, I never got to test out that theory.
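To make the warm-start idea concrete, here is a toy sketch: re-solving the DC operating point of a diode-resistor node after a small parameter change, reusing the previous solution as the Newton starting point. The component values are arbitrary and this is my own illustration, not anyone's product code.

```python
# Toy illustration of warm-starting a DC solve -- my own sketch with
# arbitrary component values, not anyone's product code. A 1 V source
# drives a resistor R into a diode to ground; we Newton-solve KCL at
# the node, then re-solve after a small change to R.
import math

IS, VT, VS = 1e-14, 0.02585, 1.0   # diode saturation current, kT/q, supply

def solve_node(R, v0):
    """Newton iteration for the node voltage, starting from guess v0.
    Returns (voltage, iterations used)."""
    v = v0
    for it in range(1, 200):
        f = IS * (math.exp(v / VT) - 1.0) - (VS - v) / R   # KCL residual
        df = IS / VT * math.exp(v / VT) + 1.0 / R
        step = f / df
        v -= max(-0.1, min(0.1, step))   # damp large steps for robustness
        if abs(step) < 1e-12:
            return v, it
    return v, it

v1, cold = solve_node(R=1000.0, v0=0.0)   # cold start from zero
v2, warm = solve_node(R=1050.0, v0=v1)    # warm start from the old solution
print(f"cold: {cold} iterations, warm: {warm}")
```

The warm start converges in a fraction of the iterations; your point, as I read it, is that this saving is small next to the simulation time itself.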
Thanks again,
Mike
Hi Mike,
Thanks for visiting our booth and mentioning us in your transcripts from this year's DAC. I just wanted to make a note regarding our simulator. AnSyn does provide simulation capabilities; however, we also support the use of 3rd party simulators with our optimization engine. That is, as an AnSyn customer you are free to use EqSim, Eldo, Spectre, etc. (or any combination of simulators) to evaluate circuit performance. Typically, the customer will run the final verification in a sign-off simulator, e.g., Spectre, and use many EqSim instances to do an accurate and thorough search of the design space to ensure high performance and guarantee a robust circuit.
We feel that analog designers appreciate the ability to run 20 simulators in parallel, in order to optimize (or analyze) their design under the influence of statistical variations and multiple corners. Although we support the use of high-level models, they can never compete with high-accuracy device-level simulation in achieving a robust solution. And optimizing a design without considering non-ideal effects, such as mismatch, will, for sure, result in useless solutions.
To us it is clear that the optimizer and the evaluation of the cost function (running the simulator) are related; they are not separate entities.
Best regards,
Emil Hjalmarson
CEO, AnSyn
Emil,
Thank you very much for taking the time to provide that additional information. As we discussed at DAC, I totally agree with your notion of a special-purpose simulator used in conjunction with a "golden" reference for final verification.
-Mike