I'm completing a test design using the gpdk045 kit and the gm/ID methodology for educational purposes. To characterize my technology (lookup tables built from DC sweeps), I use Boris Murmann's gm/ID starter-kit scripts, freely available on the net, which I have adapted for the gpdk045. This makes it quick to explore design trade-offs in Matlab. However, the results of a simple single-transistor simulation differ between Virtuoso and Matlab.
The CDF parameters as, ad, ps, pd, nrd, etc. are calculated by callback functions that depend on the transistor dimensions. How can I use these callback functions to extend my Matlab netlist so that I can generate accurate lookup tables?
I managed to find the kit (it would have been easier if you'd actually given a link - it's here). The short answer is no: since the kit runs completely outside Virtuoso, it cannot see the CDF callbacks - for that, a component would need to be created in Virtuoso.
However, looking at what the code is doing, I don't think as, ad, ps and pd matter here, since they only affect the capacitances and not the DC results - although they are easy enough to compute, given that they are just the area and perimeter of the source/drain regions. The stress parameters sa, sb and sd are unaffected by the width and length of the device, so you can find the fixed values from an experiment in Virtuoso and put them into your generated netlist. The well-proximity-effect parameters sca, scb and scc are unaffected by the length (which is what the Matlab code sweeps) and are fixed for a specific width. From what I can see, this Matlab code generates data for one width at a time, so presumably you could just see what values the callbacks compute for that width and put them into your generated netlist.
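For the source/drain area and perimeter, a minimal sketch of the computation (shown here in Python; the same few lines translate directly into the kit's Matlab netlist-generation scripts), assuming a simple single-finger layout. The diffusion-extension parameter hdif and the perimeter convention (whether the gate edge is counted) are assumptions - check them against what the gpdk045 callbacks actually produce in Virtuoso:

```python
def sd_geometry(w, hdif):
    """Source/drain area and perimeter for a simple single-finger
    MOS layout, where the diffusion extends a distance hdif from
    the gate edge.

    w    -- device width in meters
    hdif -- source/drain diffusion extension in meters
    Returns (area, perimeter); for a symmetric layout,
    as = ad = area and ps = pd = perimeter.
    """
    area = w * hdif          # diffusion rectangle: W x hdif
    perim = 2.0 * hdif + w   # three open sides (gate edge excluded here)
    return area, perim

# Example: W = 1 um, hdif = 0.16 um
a, p = sd_geometry(1e-6, 0.16e-6)
```

For a symmetric single-finger device, as = ad and ps = pd; multi-finger layouts with shared diffusions need different expressions, which is exactly where reading the callback values out of Virtuoso is the safer route.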
I have a worksheet containing a pivot table with a number of fields set up; one of these is called "Name". I need to run a macro that loops through each of the roughly 50 values of "Name". If a particular value of Name matches a list of unique occurrences of Name in a dynamic named range on another sheet in the same workbook, I need to filter the pivot table by that Name only, create a new workbook, paste-special the values of the filtered pivot extract into a sheet in the new workbook, save and close the new workbook, and then move on to the next value of "Name".
I am OK with the copying and pasting into a new workbook, because I already have working code for that part. What I am struggling with, and would appreciate help with, is the part where there is a variable list/range of "Names" in a sheet: using that list to iterate through the pivot table, filtering the relevant names one by one (only if the name matches what is in the range, ignoring any names in the pivot that are not in the range), then moving on to the next value of Name in the pivot, checking whether it is in the range, and so on. I think it should be some kind of For...Next loop or Select Case statement, with an array containing the values in the range being checked against the values in the pivot table's Name field, but I just can't translate my thoughts into code.
There are three sheets: data, lookup and pivot. The data tab is the source for the pivot table. The pivot table contains various names, e.g. Alana, Boris, Cojak, etc. The lookup tab is a list of names; every name in that list will always be in the Name column of the pivot tab, but not every Name in the pivot tab will be in the list of names on the lookup sheet.
What I'd like is to run through the list of names in the lookup tab one by one: filter the pivot table by that name, copy the output and paste-special the values onto a new tab (then save as a new file and close it, but I can sort that part), then move on to the next name in the list on the lookup tab.
So the first name in the lookup tab is Boris: filter the pivot table on Name = Boris, copy and paste-special the values into a new tab (save as a new file, then close it). The next name in the lookup list is Alana: change the pivot table filter to Alana, copy and paste-special the values into a new tab. And so on.
In real life there is a lot more data and many more columns and names, but if I can get something working on this small sample I can adapt it to the real version. I hope someone can help...
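For what it's worth, the loop described above can be sketched roughly as follows (Excel VBA; the sheet names and the named range NameList are assumptions - adjust them to your workbook). It shows only the iterate-and-filter part; the existing copy/paste/save code slots into the marked spot:

```vba
Sub FilterPivotByLookupNames()
    Dim pt As PivotTable, pf As PivotField, pi As PivotItem
    Dim cell As Range

    Set pt = Worksheets("pivot").PivotTables(1)
    Set pf = pt.PivotFields("Name")

    ' Loop over the dynamic named range of names on the lookup sheet
    For Each cell In Worksheets("lookup").Range("NameList").Cells
        pf.ClearAllFilters
        ' Make the wanted item visible first so the pivot is never
        ' left with zero visible items, then hide every other name
        pf.PivotItems(CStr(cell.Value)).Visible = True
        For Each pi In pf.PivotItems
            If pi.Name <> CStr(cell.Value) Then pi.Visible = False
        Next pi
        ' ... existing code goes here: copy the filtered pivot, paste
        ' special values into a new workbook, save and close it ...
    Next cell
End Sub
```

This relies on every name in the lookup range actually existing in the pivot's Name field, as the question states; a name missing from the pivot would raise a runtime error on the PivotItems lookup.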
I have recently been using cross-object formulas (I'll use COF for notation) for the first time, and as far as I can tell, the only way to use fields from another object is if the lookup is from the object on which the formula field is being added.
That's correct. When you create a lookup from A to B you are creating a 1:M (parent-child) model: object A (the child) can reference only one record of object B (the parent), but a record of object B can be referenced from many object A records - a parent can have many children.
If you were to create a formula field on object B (the parent), how would it know which record of object A (the child) to reference when there might be more than one child? At this stage, formula fields don't have the functionality to work with children. To do that, you would need to write Apex code (a trigger or class) that retrieves the specific child records (by executing a SOQL query) and then implements your own custom logic over those records.
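As a rough illustration of that Apex approach (the object and field names Parent__c, Child__c, Amount__c and Total_Amount__c are hypothetical), a trigger on the child can aggregate the children with a SOQL query and write the result onto the parent:

```apex
// Sketch: recompute a parent field from its children, since formula
// fields cannot traverse the parent-to-child direction.
trigger ChildRollup on Child__c (after insert, after update,
                                 after delete, after undelete) {
    Set<Id> parentIds = new Set<Id>();
    for (Child__c c : Trigger.isDelete ? Trigger.old : Trigger.new) {
        if (c.Parent__c != null) parentIds.add(c.Parent__c);
    }
    List<Parent__c> parents = new List<Parent__c>();
    for (AggregateResult ar : [SELECT Parent__c p, SUM(Amount__c) total
                               FROM Child__c
                               WHERE Parent__c IN :parentIds
                               GROUP BY Parent__c]) {
        parents.add(new Parent__c(
            Id = (Id) ar.get('p'),
            Total_Amount__c = (Decimal) ar.get('total')));
    }
    update parents;
}
```

Depending on the org, a declarative alternative such as a roll-up summary field (master-detail only) may avoid the need for code entirely.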
Provenance tracking for database operations, i.e., automatically collecting and managing information about the origin of data, has received considerable interest from the database community in the last decade. Efficiently generating and querying provenance is essential for debugging data and queries, evaluating trust measures for data, defining new types of access control models, auditing, and as a supporting technology for data integration and probabilistic databases. The de facto standard for database provenance is to model provenance as annotations on data and to compute the provenance for the outputs of an operation by propagating annotations. Many provenance systems use a relational encoding of provenance annotations. These systems apply query rewrite techniques to transform a query q into a query that propagates input annotations to produce the result of q annotated with provenance. This approach has many advantages: it benefits from existing database technology (e.g., provenance computations are optimized by the database optimizer), and queries over provenance can be expressed as SQL queries over the relational encoding. Alternatively, we can compile a special-purpose provenance query language into SQL queries over such an encoding. In this project we advance the current state-of-the-art in several aspects:
GProM is a database middleware that adds provenance support to multiple database backends. Provenance is information about how data was produced by database operations; that is, for a row in the database or returned by a query, we capture from which rows it was derived and by which operations. The system compiles declarative queries with provenance requests into SQL code and executes this SQL code on a backend database system. GProM supports provenance capture for SQL queries and transactions, and produces provenance graphs explaining existing and missing answers for Datalog queries. Provenance is captured on demand using a compilation technique called instrumentation. Instrumentation rewrites an SQL query (or past transaction) into a query that returns rows paired with their provenance. The output of the instrumentation process is a regular SQL query that can be executed on any standard relational database. The instrumented query generated from a provenance request returns a standard relation that maps rows to their provenance. Provenance for transactions is captured retroactively using a declarative replay technique called reenactment that we have developed at IIT. GProM extends multiple frontend languages (e.g., SQL and Datalog) with provenance requests and can produce code for multiple backends (currently Oracle). The reenactment approach was developed in collaboration with Oracle as part of the provenance for temporal databases project. Other noteworthy features of GProM include support for multiple database backends and an optimizer for rewritten queries.
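As a toy illustration of the instrumentation idea (plain Python with SQLite, not GProM's actual rewrite rules): the rewritten query returns the same rows as the original query, each paired with an identifier of the input row it was derived from:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, item TEXT, qty INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [(1, "pen", 10), (2, "ink", 5), (3, "pen", 7)])

# Original query: which items sold more than 6 units?
q = "SELECT item, qty FROM sales WHERE qty > 6"

# Instrumented version: same result rows, but each one carries the id
# of the input row it was derived from as a provenance annotation.
q_prov = ("SELECT item, qty, id AS prov_sales_id "
          "FROM sales WHERE qty > 6 ORDER BY id")

rows = conn.execute(q_prov).fetchall()
# rows == [('pen', 10, 1), ('pen', 7, 3)]
```

Real instrumentation must propagate annotations through joins, aggregation, and set operations, which is where the rewrite rules become nontrivial; this sketch only shows the single-table selection case.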
An overview of GProM's architecture is shown above. The user interacts with the system using an extension of one of the supported frontend languages (currently SQL and Datalog). Specifically, we support new language constructs for capturing and managing provenance (similar to Perm). Incoming statements are translated into a relational algebra graph representation (1). Similar to the intermediate code representations used by compilers, we use relational algebra as an intermediate representation of computation that is independent of the target language. If the statement does not use any provenance features, the algebra graph is translated back into the declarative language understood by the backend; e.g., we support several native SQL dialects using vendor-specific SQL code generators (7). If the input query uses one of the provenance extensions supported by our system, then several instrumentation modules may get involved to serve the user request.