Below is an example output (I fudged the numbers used to create the plot so as not to disclose any process information from the PDKs I have access to). Here I have three lines corresponding to three different device sizes (width or length, it's irrelevant for this example), and I plot the curves along with the FoM defined earlier.
Now I’m done. I have characterized my device and never need to run this testbench again (as long as the PDK doesn’t drastically change). You can also run this over temperature and process corners and select a value based on your worst expected corner if you want. So from this, what I do (actually, it’s something my co-worker started doing that I adopted as well) is create a simple table of gm/Id values and their corresponding current densities. I choose three values: one to represent weak inversion, one for moderate, and another for strong. For weak, I pick the value where the curve just starts to bend away from its plateau (you can see this point in the example plot above). For moderate, I take the middle of the two ‘knees’ on the curve, and for strong I choose roughly where the FoM peak occurs. All of these points also correspond to what Dr. Sansen outlines as efficient operating points for power (weak inversion), speed (strong inversion), and a trade-off between speed and power (moderate inversion). All of this information can be found in my previous post. Thus, I end up with the following table:
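As a concrete stand-in for that table, here’s a minimal code sketch of how you might capture it for reuse. The gm/Id and current-density values below are purely illustrative placeholders (not from any real PDK), and the helper name is my own:

```python
# Region -> (gm/Id in 1/V, drain-current density I_D/(W/L) in A).
# All numbers here are illustrative placeholders, NOT from any PDK.
SIZING_TABLE = {
    "weak":     (25.0, 50e-9),   # best power efficiency
    "moderate": (15.0, 1e-6),    # power/speed trade-off
    "strong":   (5.0,  10e-6),   # best speed
}

def current_density(region: str) -> float:
    """Look up the characterized drain-current density for a region."""
    return SIZING_TABLE[region][1]
```

The point is simply that once the characterization is done, the whole methodology collapses to three numbers you can look up.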
From this, we can relate gm/Id to the inversion coefficient (IC). This gives us a rule of thumb: a large gm/Id means the device is in weak inversion, a small gm/Id means the device is in strong inversion, and something in between is in moderate. We rarely need to break the exact relation out again so long as we remember the rule (though it comes in handy for more complicated design exercises — for example, using the spectral voltage noise density equation in Dr. Binkley’s book to bound the acceptable values of IC, we can then back-calculate our required current density that way).
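A common form of the gm/Id-to-IC relation is the EKV-style interpolation that Dr. Binkley’s methodology builds on (a sketch — the exact form and the value of the slope factor depend on the model and process):

$$ \frac{g_m}{I_D} \;\approx\; \frac{1}{n\,U_T\left(\tfrac{1}{2} + \sqrt{\tfrac{1}{4} + IC}\right)} $$

where $n$ is the subthreshold slope factor and $U_T = kT/q$ is the thermal voltage. For $IC \ll 1$ (weak inversion), gm/Id saturates at its maximum of $1/(nU_T)$; for $IC \gg 1$ (strong inversion), it falls off as $1/\sqrt{IC}$ — which is exactly the large/small rule of thumb.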
Information in hand, we can proceed to sizing a device. The intuition for where to operate a device can come from a graph that I have modified and reproduced from Dr. Binkley’s MIXDES Conference paper here:
So, for example, say we have a differential input pair to size for an amplifier. The things we care about (for pretty much any amp design) are minimizing mismatch, maximizing gain, minimizing thermal noise, and maximizing bandwidth. Clearly, gain and bandwidth directly oppose each other on this graph, so how do we select our desired operating region? Here I come back to Dr. Sansen’s statement (please see the previous post) that operating with an inversion coefficient of ‘1’ is about optimal for most applications: i.e., operate in moderate inversion. Depending on our needs, we can always optimize the device further, but this provides a very good and efficient starting point for our design (which is the whole point of this methodology). So, we look up the current density corresponding to a gm/Id value that places us in moderate inversion. We have two more requirements: we need to know our drain current and our length. Given that the device is an input pair, we probably don’t want too long a length, since it would increase our area by a lot (though we do get an improvement in output impedance!), so we pick something short. Again, the beauty of this methodology is that it gives a good, near-optimal starting point that we can change later while still maintaining our desired operating region. The drain current really depends on the application as well as the current references readily available in the circuit; say we pick a relatively large value because we need to reduce the noise of our circuit and have a relaxed power budget. Given all this, we calculate our required W/L ratio by dividing the drain current by the chosen current density; multiplying that ratio by the chosen length then gives the width. Done!
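That last step is simple enough to sketch in a few lines of Python. The numbers below are hypothetical, and this assumes the characterized current density is defined as drain current per unit W/L ratio — if your table instead stores drain current per unit width, the width follows directly as W = I_D / J:

```python
def size_device(i_drain: float, j_density: float, length_um: float):
    """Turn a target current density into a device size.

    Assumes the characterized density J is I_D per unit W/L ratio;
    if your table stores I_D per unit width, use W = I_D / J directly.
    """
    w_over_l = i_drain / j_density        # required W/L ratio
    width_um = w_over_l * length_um       # width for the chosen length
    return w_over_l, width_um

# Hypothetical numbers: 100 uA of drain current, a moderate-inversion
# density of 1 uA, and L = 0.5 um:
ratio, width = size_device(100e-6, 1e-6, 0.5)   # ratio ~ 100, width ~ 50 um
```

Swapping in a different region later is just a different `j_density` lookup — the rest of the arithmetic is unchanged.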
I know some of my statements seem wishy-washy (choice of drain current, for example). The thing is, most of this information is tightly bound in a real design: we have noise targets, the circuit has to fit in a certain area, the circuit can only consume so much power, we have to operate at a certain frequency, etc. This is part of design: there’s no one-size-fits-all solution. What we can do, however, is use Dr. Binkley’s graph to choose a good starting point for our design. This method is very efficient at approximating an optimal design, allowing you to arrive there much more quickly than traditional methods (while drastically limiting the number of calculations you need to perform). It also, crucially, helps to eliminate the trap many designers fall into where they parameterize their circuits to all hell and run hundreds of simulations to find an optimal point. This is terrible, as it never allows a designer to truly understand their circuit (just like using only square-law equations gives you very little insight into a circuit’s true operation). Dr. Binkley’s graph is great because it lets a designer familiarize themselves with the correlation between a device’s operating region and common circuit specifications. In addition, it allows a designer to intelligently tweak their design by modifying the operating region for a desired result, which means the designer is forced to know exactly why each device is sized and biased the way it is (whereas in the fully-parameterized-circuit realm, the designer’s answer would be: “I dunno, ‘cause the simulations looked good?”…. tsk, tsk). Basically, this methodology forces a designer to adhere to good design practices and to never place/size/bias a device without understanding what they are trying to accomplish.