RazorSQL is licensed per user. The license is perpetual and does not expire. RazorSQL can be purchased with either 1, 2, or 3 years of included maintenance. Maintenance includes any updates / upgrades released during the maintenance window and product-related email support. You can continue to use RazorSQL after the maintenance period expires.
RazorSQL Upgrade / Renewal Licenses: For licensed users who are no longer eligible for free upgrades, click the "RazorSQL Upgrade / Renewal Licenses" link to upgrade or renew to the latest version of RazorSQL, or to check whether your license is eligible for a free upgrade.
The Buy Online Now link above provides the following payment options: Credit Card, PayPal, Check (US Only). For alternative payment options such as wire transfer, open invoice, etc., please click the following: Alternative Payment Options
RazorSQL is licensed per user. Each licensed user may use RazorSQL on multiple computers and operating systems. License codes purchased via the above link unlock all versions of RazorSQL (Windows, Mac OS X, and Linux / Unix).
After submitting your payment, you will be shown a license code that can be entered into trial versions to convert them to full versions. An email will also be sent to the email address you provide to FastSpring. This email will contain the license code that can be used to convert the trial versions to full versions. Please be sure to monitor any bulk mail folders you have, as spam filters sometimes filter legitimate emails.
TL;DR: DuckDB continues to push the boundaries of SQL syntax to both simplify queries and make more advanced analyses possible. Highlights include dynamic column selection, queries that start with the FROM clause, function chaining, and list comprehensions. We boldly go where no SQL engine has gone before! For more details, see the documentation for friendly SQL features.
We believe there are many valid reasons for innovation in the SQL language, among them opportunities to simplify basic queries and also to make more dynamic analyses possible. Many of these features arose from community suggestions! Please let us know your SQL pain points on Discord or GitHub and join us as we change what it feels like to write SQL!
When working with incremental calculated expressions in a select statement, traditional SQL dialects force you to either write out the full expression for each column or create a common table expression (CTE) around each step of the calculation. Now, any column alias can be reused by subsequent columns within the same select statement. Not only that, but these aliases can be used in the where and order by clauses as well.
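A minimal sketch of alias reuse (the table and column names here are illustrative, not from the original post):

```sql
SELECT
    x,
    x + 1 AS y,   -- define an alias...
    y * 2 AS z    -- ...and reuse it in the very next column
FROM generate_series(1, 3) AS t(x)
WHERE y > 2       -- aliases also work in WHERE...
ORDER BY z;       -- ...and in ORDER BY
```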
Databases typically prefer strictness in column definitions and flexibility in the number of rows. This can help by enforcing data types and recording column level metadata. However, in data science workflows and elsewhere, it is very common to dynamically generate columns (for example during feature engineering).
No longer do you need to know all of your column names up front! DuckDB can select and even modify columns based on regular expression pattern matching, EXCLUDE or REPLACE modifiers, and even lambda functions (see the section on lambda functions below for details!).
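A hedged sketch, assuming a hypothetical trek_facts table whose warp-related columns all contain the substring warp:

```sql
-- select every column whose name matches a regular expression
SELECT COLUMNS('.*warp.*') FROM trek_facts;

-- the same expression also works inside aggregate functions
SELECT MAX(COLUMNS('.*warp.*')) FROM trek_facts;
```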
We can also create a WHERE clause that applies across multiple columns. All columns must match the filter criteria, which is equivalent to combining them with AND. Which episodes had at least two warp speed orders and reached at least warp speed level 2?
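A sketch of a multi-column filter, again assuming the hypothetical trek_facts table with warp-related columns:

```sql
-- every matching column must satisfy the predicate (joined with AND)
SELECT episode_num, COLUMNS('.*warp.*')
FROM trek_facts
WHERE COLUMNS('.*warp.*') >= 2;
```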
Individual columns can also be either excluded or replaced prior to applying calculations on them. For example, since our dataset only includes season 1, we do not need to find the MAX of that column. It would be highly illogical.
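For example, assuming a season_num column in the hypothetical trek_facts table, EXCLUDE drops it before the aggregate is applied:

```sql
-- aggregate every column except season_num
SELECT MAX(COLUMNS(* EXCLUDE (season_num))) FROM trek_facts;
```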
The REPLACE syntax is also useful when applied to a dynamic set of columns. In this example, we want to convert the dates into timestamps prior to finding the maximum value in each column. Previously this would have required an entire subquery or CTE to pre-process just that single column!
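A sketch, assuming an aired_date column of type DATE in the hypothetical trek_facts table:

```sql
-- cast aired_date to a timestamp before taking the MAX of every column
SELECT MAX(COLUMNS(* REPLACE (aired_date::TIMESTAMP AS aired_date)))
FROM trek_facts;
```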
The most flexible way to query a dynamic set of columns is through a lambda function. This allows for any matching criteria to be applied to the names of the columns, not just regular expressions. See more details about lambda functions below.
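A minimal sketch of a lambda inside COLUMNS, where the lambda receives each column name as a string:

```sql
-- select columns via an arbitrary predicate on their names
SELECT COLUMNS(col -> col LIKE '%warp%') FROM trek_facts;
```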
This has an additional benefit beyond saving keystrokes and staying in a development flow state: autocomplete will have much more context when you begin to choose columns to query. Give the AI a helping hand!
Many SQL blogs advise the use of CTEs instead of subqueries. Among other benefits, they are much more readable. Operations are compartmentalized into discrete chunks and they can be read in order top to bottom instead of forcing the reader to work their way inside out.
DuckDB enables the same interpretability improvement for every scalar function! Use the dot operator to chain functions together, just like in Python. The prior expression in the chain is used as the first argument to the subsequent function.
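For example, chaining upper and string_split reads left to right instead of inside out:

```sql
-- x.f(y) is rewritten as f(x, y), so chains read left to right
SELECT ('Make it so').upper().string_split(' ') AS words;
-- words = ['MAKE', 'IT', 'SO']
```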
DuckDB aims to blend the best of databases and dataframes. This new syntax is inspired by the concat function in Pandas. Rather than vertically stacking tables based on column position, columns are matched by name and stacked accordingly. Simply replace UNION with UNION BY NAME or UNION ALL with UNION ALL BY NAME.
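A sketch with two illustrative tables whose columns are defined in different orders:

```sql
CREATE TABLE proverbs (author VARCHAR, quote VARCHAR);
CREATE TABLE mottos   (quote VARCHAR, author VARCHAR);  -- reversed order

-- columns are matched by name, not by position
SELECT * FROM proverbs
UNION ALL BY NAME
SELECT * FROM mottos;
```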
Another common situation where column order is strict in SQL is when inserting data into a table. Either the columns must match the order exactly, or all of the column names must be repeated in two locations within the query.
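INSERT ... BY NAME removes that duplication. A self-contained sketch with an illustrative table:

```sql
CREATE TABLE quotes (author VARCHAR, quote VARCHAR);

-- BY NAME matches the SELECT aliases to table columns, regardless of order
INSERT INTO quotes BY NAME
SELECT 'Make it so' AS quote, 'Picard' AS author;
```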
The COLUMNS expression will use all columns except item. After stacking, the column containing the column names from pivoted_purchases should be renamed to year, and the values within those columns represent the count. The result is the same dataset as the original.
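A sketch of that UNPIVOT, assuming pivoted_purchases has an item column plus one column per year:

```sql
UNPIVOT pivoted_purchases
ON COLUMNS(* EXCLUDE (item))   -- stack every column except item
INTO NAME year VALUE count;    -- old column names become year, cell values become count
```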
Lambdas can also be used to filter down the items in a list. The lambda returns a list of booleans, which is used by the list_filter function to select specific items. The contains function is using the function chaining described earlier.
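A minimal sketch combining list_filter with a chained contains call:

```sql
-- keep only the list items for which the lambda returns true
SELECT list_filter(['apple', 'banana', 'cherry'],
                   x -> x.contains('an')) AS matches;
-- matches = ['banana']
```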
A struct in DuckDB is a set of key/value pairs. Behind the scenes, a struct is stored with a separate column for each key. As a result, it is computationally easy to explode a struct into separate columns, and now it is also syntactically simple as well! This is another example of allowing SQL to handle dynamic column names.
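A minimal sketch with an illustrative struct literal:

```sql
-- each struct key becomes its own column (toast, coffee)
SELECT breakfast.*
FROM (SELECT {'toast': 2, 'coffee': 1} AS breakfast);
```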
DuckDB utilizes strong typing to provide high performance and enforce data quality. However, DuckDB is also as forgiving as possible using approaches like implicit casting to avoid always having to cast between data types.
However, if a UNION type is used, each individual row retains its original data type. A UNION is defined using key-value pairs with the key as a name and the value as the data type. This also allows the specific data types to be pulled out as individual columns:
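A sketch with an illustrative table holding a UNION of an integer and a string:

```sql
CREATE TABLE movies (movie UNION(num INTEGER, name VARCHAR));
INSERT INTO movies VALUES (1), ('The Motion Picture');

-- each member of the union can be pulled out as its own column;
-- rows of the other member type come back as NULL
SELECT movie, movie.num, movie.name FROM movies;
```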
DuckDB takes a nod from the describe function in Pandas and implements a SUMMARIZE keyword that will calculate a variety of statistics about each column in a dataset for a quick, high-level overview. Simply prepend SUMMARIZE to any table or SELECT statement.
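For example, assuming the hypothetical trek_facts table again:

```sql
SUMMARIZE trek_facts;

-- SUMMARIZE also works on an arbitrary query
SUMMARIZE SELECT * FROM trek_facts WHERE season_num = 1;
```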
DuckDB aims to be the easiest database to use. Fundamental architectural decisions to be in-process, have zero dependencies, and have strong typing contribute to this goal, but the friendliness of its SQL dialect has a strong impact as well. By extending the industry-standard PostgreSQL dialect, DuckDB aims to provide the simplest way to express the data transformations you need. These changes range from altering the ancient clause order of the SELECT statement so a query can begin with FROM, to a fundamentally new way to use functions with chaining, to advanced nested data type calculations like list comprehensions. Each of these features is available in the 0.8.1 release.
RazorSQL provides the ability to connect to Amazon DynamoDB databases. It provides visual tools for creating and dropping tables, editing table data, and more. RazorSQL also allows users to use SQL syntax to execute SQL selects, inserts, updates, and deletes against DynamoDB databases, and has specific syntax to allow the user to force a query or scan operation.
The Amazon DynamoDB database does not natively support SQL. Any SQL statements executed in RazorSQL are translated into DynamoDB-specific API calls by RazorSQL. RazorSQL does not support the full SQL standard for DynamoDB. Listed below are the select, select_query, select_scan, insert, update, and delete SQL syntax supported by RazorSQL.
RazorSQL includes a "select_scan" syntax to force a query to use a scan operation instead of a query operation. Here is an example of the select_scan syntax:

select_scan * from CustomerOrders where CustomerId = 'al...@example.com'
RazorSQL supports scanning for elements inside of maps using the map_column_name.map_attribute_name syntax. For example, if a DynamoDB table has a map column named measurement, and the map has elements named width and height, the following examples show how to scan for the map elements.
Note that if the map elements are character based, they should be wrapped in single quotes. If the map elements are not character based, they should not be wrapped in single quotes in the query.

select * from map_table where measurement.width = 33;
select * from map_table where measurement.width = '24-1/8th';

RazorSQL supports scanning for nested maps up to five levels deep. To scan for map elements within another map, use the following syntax:

select * from test_nested_map where outermap.innermap.width = 10;
RazorSQL supports scanning DynamoDB tables using a syntax similar to the AWS Command Line.
The format of the syntax is listed below. The projection expression is optional. If the projection expression is not included, all columns will be returned. There needs to be an expression-attribute-values= line for each value in the filter expression. Text values should be wrapped in single quotes. Numeric values should not be.

scan
table-name=TableName
projection-expression=Column1,Column2
filter-expression=enter filter expression here
expression-attribute-values=:value1,'Value 1 Text'
expression-attribute-values=:value2,999

Here is an example scan that scans the table ProductCatalog, returning rows where the Description column contains the :x value (Red) and the id column is greater than the :y value (10). This scan returns the columns Brand and id:

scan
table-name=ProductCatalog
projection-expression=Brand,id
filter-expression=contains(Description, :x) and id > :y
expression-attribute-values=:x,'Red'
expression-attribute-values=:y,10

Here is a similar scan, but the projection expression is not included, so all columns are returned, and all rows with an id greater than 10 are returned:

scan
table-name=ProductCatalog
filter-expression=id > :x
expression-attribute-values=:x,10