import pandas as pd
import nest_asyncio
nest_asyncio.apply()
from blendsql.models import LlamaCpp
from blendsql import BlendSQL
import blendsql
BlendSQL by Example¶
This notebook introduces BlendSQL and some of the use cases it can support.
Importantly, the novelty of BlendSQL isn't the ability to constrain language models according to some regular expression or context-free grammar; we can credit projects like guidance and outlines for that. Instead, the novelty of BlendSQL is its ability to infer these constraints from the surrounding SQL syntax and closely align generation with the structure of the database.
SQL, as a grammar, has a lot of rules. Just take these SQLite syntax diagrams for example. These rules include things like: an IN statement should be followed by a list of items; < and > should compare against numeric values, while = can take any datatype; and so on. We can use these rules to inform language-model functions, which we call 'ingredients' and denote with double curly brackets ({{ and }}).
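As a quick preview (both the People table and the LLMMap ingredient appear later in this notebook), the = TRUE comparison below is already enough for BlendSQL to infer that the ingredient should generate a boolean:
SELECT * FROM People
WHERE {{LLMMap('Is a singer?', Name)}} = TRUE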
A Note on Models¶
This demo uses LlamaCpp and assumes access to a GPU.
If you don't have the ability to run this model, take a look at any of the other model integrations that BlendSQL supports. Importantly, note that only local models via TransformersLLM and LlamaCpp provide the type constraints from the Play by the Type Rules paper.
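For instance, the LiteLLM integration (used later in this notebook) swaps in a remote API model; the TransformersLLM line below is only a sketch, so check the model integration docs for its exact arguments.
# Remote API model (no logit access, so constraints act as prompt guidance):
# model = blendsql.models.LiteLLM("openai/gpt-4o-mini")
# Local Hugging Face model (argument shown here is an assumption; see the docs):
# from blendsql.models import TransformersLLM
# model = TransformersLLM("Qwen/Qwen2.5-0.5B-Instruct")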
model = LlamaCpp(
filename="google_gemma-3-4b-it-Q6_K.gguf",
model_name_or_path="bartowski/google_gemma-3-4b-it-GGUF",
config={"n_gpu_layers": -1, "n_ctx": 4096, "seed": 100, "n_threads": 16},
)
By default, loading a connection with BlendSQL will create an empty in-memory DuckDB database. As a result, we can use cool DuckDB functions like read_text.
bsql = BlendSQL(
model=model,
)
smoothie = bsql.execute(
"""
SELECT {{
LLMQA(
'Describe BlendSQL in 50 words.',
context=(
SELECT content[0:5000] AS "README"
FROM read_text('https://raw.githubusercontent.com/parkervg/blendsql/main/README.md')
)
)
}} AS answer
"""
)
print(smoothie.df)
┌───────────────────────────────────────────────────────┐
│ answer                                                │
├───────────────────────────────────────────────────────┤
│ BlendSQL is an open-source SQL tool that combines ... │
└───────────────────────────────────────────────────────┘
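A quick note on the return value: execute() hands back a "smoothie" object, which bundles the result with run metadata. Both attributes below are used elsewhere in this notebook.
print(smoothie.df)  # The result, as a pandas DataFrame
print(smoothie.meta.process_time_seconds)  # Run metadata, e.g. timing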
We can also set up a local database from a Dict[str, pd.DataFrame] object.
bsql = BlendSQL(
{
"People": pd.DataFrame(
{
'Name': [
'George Washington',
'John Adams',
'President Thomas Jefferson',
'James Madison',
'James Monroe',
'Alexander Hamilton',
'Sabrina Carpenter',
'Charli XCX',
'Elon Musk',
'Michelle Obama',
'Elvis Presley',
],
'Known_For': [
'Established federal government, First U.S. President',
'XYZ Affair, Alien and Sedition Acts',
'Louisiana Purchase, Declaration of Independence',
'War of 1812, Constitution',
'Monroe Doctrine, Missouri Compromise',
'Created national bank, Federalist Papers',
'Nonsense, Emails I Cant Send, Mean Girls musical',
'Crash, How Im Feeling Now, Boom Clap',
'Tesla, SpaceX, Twitter/X acquisition',
'Lets Move campaign, Becoming memoir',
'14 Grammys, King of Rock n Roll'
]
}
),
"Eras": pd.DataFrame(
{
'Years': [
'1800-1900',
'1900-2000',
'2000-Now'
]
}
)
},
# This model can be changed, according to what your personal setup is
model=model
)
# Print the tables in our database
for tablename in bsql.db.tables():
print(tablename)
print(blendsql.common.utils.tabulate(bsql.db.execute_to_df(f"SELECT * FROM {tablename};")))
Eras
┌───────────┐
│ Years     │
├───────────┤
│ 1800-1900 │
│ 1900-2000 │
│ 2000-Now  │
└───────────┘
People
┌────────────────────────────┬──────────────────────────────────────────────────────┐
│ Name                       │ Known_For                                            │
├────────────────────────────┼──────────────────────────────────────────────────────┤
│ George Washington          │ Established federal government, First U.S. President │
│ John Adams                 │ XYZ Affair, Alien and Sedition Acts                  │
│ President Thomas Jefferson │ Louisiana Purchase, Declaration of Independence      │
│ James Madison              │ War of 1812, Constitution                            │
│ James Monroe               │ Monroe Doctrine, Missouri Compromise                 │
│ Alexander Hamilton         │ Created national bank, Federalist Papers             │
│ Sabrina Carpenter          │ Nonsense, Emails I Cant Send, Mean Girls musical     │
│ Charli XCX                 │ Crash, How Im Feeling Now, Boom Clap                 │
│ Elon Musk                  │ Tesla, SpaceX, Twitter/X acquisition                 │
│ Michelle Obama             │ Lets Move campaign, Becoming memoir                  │
│ Elvis Presley              │ 14 Grammys, King of Rock n Roll                      │
└────────────────────────────┴──────────────────────────────────────────────────────┘
The Elephant in the Room - Aren't LLM Functions in SQL Super Slow?¶
Short answer - compared to nearly all native SQL operations, yes.
However, when using remote APIs like OpenAI or Anthropic, we can dramatically speed up processing times by batching async requests. The example below demonstrates that, for a table with 17,686 rows and 1,165 unique values in the column we process, it takes only about 6.5 seconds to run our query with gpt-4o-mini (about 0.006 seconds per unique value).
By default, we allow 10 concurrent async requests. Depending on your own quotas set by the API provider, you may be able to increase this number using:
from blendsql import config
# Set the limit for max async calls at a given time below
config.set_async_limit(20)
A Note on Query Optimizations¶
Because LLM functions are relatively slow compared to other SQL functions, when we perform query optimizations behind the scenes, we make sure to execute all native SQL functions before any LLM-based functions. This ensures the language model only receives the smallest set of data it needs to faithfully evaluate a given SQL expression.
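As a sketch of what this means in practice (illustrative, not executed in this notebook): in a query like the one below, over the california_schools database we load next, the native County predicate is evaluated first, so the LLMMap function only receives that county's cities instead of all 17,686 rows.
SELECT City,
    {{LLMMap('Is this in the Bay Area?', City, options=('t', 'f'))}} AS 'In Bay Area?'
FROM schools
WHERE County = 'Alameda';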
db = blendsql.db.SQLite(blendsql.utils.fetch_from_hub("california_schools.db"))
print("{} total rows in the table".format(db.execute_to_list("SELECT COUNT(*) FROM schools LIMIT 10;")[0]))
print("{} total unique values in the 'City' column".format(db.execute_to_list("SELECT COUNT(DISTINCT City) FROM schools LIMIT 10;")[0]))
17686 total rows in the table
1165 total unique values in the 'City' column
smoothie = BlendSQL(
db, model=blendsql.models.LiteLLM('openai/gpt-4o-mini', caching=False)
).execute(
"""
SELECT
City,
{{LLMMap('Is this in the Bay Area?', City, options=('t', 'f'))}} AS 'In Bay Area?'
FROM schools;
""",
)
print(f"Finished in {smoothie.meta.process_time_seconds} seconds")
print(blendsql.utils.tabulate(smoothie.df.head(10)))
Executing `SELECT * FROM schools` and setting to `32d0_schools_0`...
CREATE TEMP TABLE "32d0_schools_0" ( "CDSCode" TEXT, "NCESDist" TEXT, "NCESSchool" TEXT, "StatusType" TEXT, "County" TEXT, "District" TEXT, "School" TEXT, "Street" TEXT, "StreetAbr" TEXT, "City" TEXT, "Zip" TEXT, "State" TEXT, "MailStreet" TEXT, "MailStrAbr" TEXT, "MailCity" TEXT, "MailZip" TEXT, "MailState" TEXT, "Phone" TEXT, "Ext" TEXT, "Website" TEXT, "OpenDate" TEXT, "ClosedDate" TEXT, "Charter" FLOAT, "CharterNum" TEXT, "FundingType" TEXT, "DOC" TEXT, "DOCType" TEXT, "SOC" TEXT, "SOCType" TEXT, "EdOpsCode" TEXT, "EdOpsName" TEXT, "EILCode" TEXT, "EILName" TEXT, "GSoffered" TEXT, "GSserved" TEXT, "Virtual" TEXT, "Magnet" FLOAT, "Latitude" FLOAT, "Longitude" FLOAT, "AdmFName1" TEXT, "AdmLName1" TEXT, "AdmEmail1" TEXT, "AdmFName2" TEXT, "AdmLName2" TEXT, "AdmEmail2" TEXT, "AdmFName3" TEXT, "AdmLName3" TEXT, "AdmEmail3" TEXT, "LastUpdate" TEXT )
Executing `{{LLMMap('Is this in the Bay Area?', 'schools::City', options='t;f')}}`...
Using options '['t', 'f']'
Making calls to Model with batch_size 5: | | 234/? [00:00<00:00, 30475.61it/s]
LLMMap with OpenaiLLM(gpt-4o-mini) only returned 1165 out of 1166 values
Finished LLMMap with values: { "Hayward": true, "Newark": true, "Oakland": true, "Berkeley": true, "San Leandro": true, "-": false, "Dublin": true, "Fremont": false, "Sacramento": true, "Alameda": null }
Combining 1 outputs for table `schools`
CREATE TEMP TABLE "32d0_schools" ( "CDSCode" TEXT, "NCESDist" TEXT, "NCESSchool" TEXT, "StatusType" TEXT, "County" TEXT, "District" TEXT, "School" TEXT, "Street" TEXT, "StreetAbr" TEXT, "City" TEXT, "Zip" TEXT, "State" TEXT, "MailStreet" TEXT, "MailStrAbr" TEXT, "MailCity" TEXT, "MailZip" TEXT, "MailState" TEXT, "Phone" TEXT, "Ext" TEXT, "Website" TEXT, "OpenDate" TEXT, "ClosedDate" TEXT, "Charter" FLOAT, "CharterNum" TEXT, "FundingType" TEXT, "DOC" TEXT, "DOCType" TEXT, "SOC" TEXT, "SOCType" TEXT, "EdOpsCode" TEXT, "EdOpsName" TEXT, "EILCode" TEXT, "EILName" TEXT, "GSoffered" TEXT, "GSserved" TEXT, "Virtual" TEXT, "Magnet" FLOAT, "Latitude" FLOAT, "Longitude" FLOAT, "AdmFName1" TEXT, "AdmLName1" TEXT, "AdmEmail1" TEXT, "AdmFName2" TEXT, "AdmLName2" TEXT, "AdmEmail2" TEXT, "AdmFName3" TEXT, "AdmLName3" TEXT, "AdmEmail3" TEXT, "LastUpdate" TEXT, "Is this in the Bay Area?" BOOLEAN )
Final Query: SELECT "32d0_schools".City AS City, "32d0_schools"."Is this in the Bay Area?" AS "In Bay Area?" FROM "32d0_schools"
Finished in 6.575707912445068 seconds
┌─────────────┬────────────────┐
│ City        │ In Bay Area?   │
├─────────────┼────────────────┤
│ Hayward     │ 1              │
│ Newark      │ 1              │
│ Oakland     │ 1              │
│ Berkeley    │ 1              │
│ Oakland     │ 1              │
│ Oakland     │ 1              │
│ Oakland     │ 1              │
│ Hayward     │ 1              │
│ San Leandro │ 1              │
│ Hayward     │ 1              │
└─────────────┴────────────────┘
Classification with LLMMap and GROUP BY, Constrained by a Column's Values¶
Below, we set up a BlendSQL query leveraging the LLMMap ingredient. This is a unary function similar to the LENGTH or ABS functions in standard SQLite. It takes a single argument (a value from a column) and returns a transformed output, which is then assigned to a new column.
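To make the unary-function analogy concrete, here is a minimal side-by-side sketch (the second query is illustrative and isn't run in this notebook):
-- Native unary function: one deterministic output per value
SELECT Name, LENGTH(Name) AS n_chars FROM People;
-- LLM-backed analogue: one generation per value, assigned to a new column
SELECT Name, {{LLMMap('How many words are in this name?', Name)}} AS n_words FROM People;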
Here, the language-model function takes in values from the Name column of the People table and outputs a value selected exclusively from the Eras.Years column.
smoothie = bsql.execute("""
SELECT GROUP_CONCAT(Name, ', ') AS 'Names',
{{LLMMap('In which time period did the person live?', Name, options=Eras.Years)}} AS "Lived During Classification"
FROM People p
GROUP BY "Lived During Classification"
""")
print(smoothie.df)
┌───────────────────────────────────────────────────────┬───────────────────────────────┐
│ Names                                                 │ Lived During Classification   │
├───────────────────────────────────────────────────────┼───────────────────────────────┤
│ George Washington, John Adams, President Thomas Je... │ 1800-1900                     │
│ Sabrina Carpenter, Charli XCX, Elon Musk, Michelle... │ 1900-2000                     │
└───────────────────────────────────────────────────────┴───────────────────────────────┘
Constrained Decoding - The Presidents Challenge¶
Why does constrained decoding matter? Imagine we want to select all the information we have in our table about the first 3 presidents of the U.S.
In the absence of relevant data stored in our database, we turn to our language model. But one thing thwarts our plans - the language model doesn't know that we've stored the 3rd president's name in our database as 'President Thomas Jefferson', not 'Thomas Jefferson'.
# Setting `infer_gen_constraints=False` - otherwise, this counter-example would work
smoothie = bsql.execute("""
SELECT * FROM People
WHERE People.Name IN {{LLMQA('First 3 presidents of the U.S?', return_type='List[str]')}}
""", infer_gen_constraints=False, verbose=True)
# The final query 'SELECT * FROM People WHERE Name IN ('George Washington','John Adams','Thomas Jefferson')' only yields 2 rows
print(smoothie.df)
Executing `{{LLMQA('First 3 presidents of the U.S?', return_type='List[str]')}}`...
Using model cache...
Final Query: SELECT * FROM People WHERE People.Name IN ('George Washington', 'John Adams', 'Thomas Jefferson')
┌───────────────────┬───────────────────────────────────────────────────────┐
│ Name              │ Known_For                                             │
├───────────────────┼───────────────────────────────────────────────────────┤
│ George Washington │ Established federal government, First U.S. Preside... │
│ John Adams        │ XYZ Affair, Alien and Sedition Acts                   │
└───────────────────┴───────────────────────────────────────────────────────┘
Constrained decoding comes to our rescue. By specifying infer_gen_constraints=True (which is the default), BlendSQL infers from the surrounding SQL syntax that we expect a value from People.Name, and we force the generation to only select from values present in the Name column - which leads to the expected response.
smoothie = bsql.execute("""
SELECT * FROM People P
WHERE P.Name IN {{LLMQA('First 3 presidents of the U.S?')}}
""", infer_gen_constraints=True, verbose=True)
print(smoothie.df)
Executing `{{LLMQA('First 3 presidents of the U.S?')}}`...
Using options '{'Charli XCX', 'President Thomas Jefferson', 'John Adams', 'George Washington', 'Elon Musk', 'James Madison', 'James Monroe', 'Sabrina Carpenter', 'Elvis Presley', 'Michelle Obama', 'Alexander Hamilton'}...'
Final Query: SELECT * FROM People AS P WHERE P.Name IN ('George Washington', 'James Madison', 'James Monroe')
┌───────────────────┬───────────────────────────────────────────────────────┐
│ Name              │ Known_For                                             │
├───────────────────┼───────────────────────────────────────────────────────┤
│ George Washington │ Established federal government, First U.S. Preside... │
│ James Madison     │ War of 1812, Constitution                             │
│ James Monroe      │ Monroe Doctrine, Missouri Compromise                  │
└───────────────────┴───────────────────────────────────────────────────────┘
Constrained Decoding - The Alphabet Challenge¶
In BlendSQL, we can utilize the power of constrained decoding to guide a language model's generation towards the structure we expect. In other words, rather than taking a "prompt-and-pray" approach in which we meticulously craft a natural language prompt which (hopefully) generates a list of 3 strings, we can interact with the logit space to ensure this is the case.
[!NOTE]
These guarantees are only made possible with open models, i.e. where we can access the underlying logits. For closed models like OpenAI and Anthropic, we rely on prompting (e.g. 'Datatype: List[str]') and make predictions "optimistically".
To demonstrate this, we can use the LLMQA ingredient. This ingredient optionally takes in a table subset as context, and returns either a scalar value or a list of scalars.
Since BlendSQL can infer the shape of a valid generation according to the surrounding SQL syntax, when we use the LLMQA ingredient in a VALUES or IN clause, it will generate a list by default.
smoothie = bsql.execute("""
SELECT * FROM ( VALUES {{LLMQA('What are the first few letters of the alphabet?')}} )
""")
print(smoothie.df)
┌────────┬────────┬────────┐
│ col0   │ col1   │ col2   │
├────────┼────────┼────────┤
│ A      │ B      │ C      │
└────────┴────────┴────────┘
Ok, so we were able to generate the first few letters of the alphabet... what if we want more?
Rather than modify the prompt itself (which can be quite finicky), we can leverage the regex-inspired quantifier argument. This takes either the string '*' (zero-or-more) or '+' (one-or-more), as well as tighter bounds like '{3}' (exactly 3) or '{1,6}' (between 1 and 6).
smoothie = bsql.execute("""
SELECT * FROM ( VALUES {{LLMQA('What are the first letters of the alphabet?', quantifier='{5}')}} )
""")
print(smoothie.df)
┌────────┬────────┬────────┬────────┬────────┐
│ col0   │ col1   │ col2   │ col3   │ col4   │
├────────┼────────┼────────┼────────┼────────┤
│ A      │ B      │ C      │ D      │ E      │
└────────┴────────┴────────┴────────┴────────┘
What if we want to generate the letters of a different alphabet? We can use the options argument for this, which takes either a reference to another column in the form 'tablename.columnname', or a tuple of strings.
smoothie = bsql.execute("""
SELECT * FROM ( VALUES {{LLMQA('What are the first letters of the alphabet?', options=('α', 'β', 'γ', 'δ'), quantifier='{3}')}} )
""")
print(smoothie.df)
┌────────┬────────┬────────┐
│ col0   │ col1   │ col2   │
├────────┼────────┼────────┤
│ α      │ β      │ γ      │
└────────┴────────┴────────┘
Agent-Based Inference with CTE Expressions¶
The above example opens up the opportunity to rewrite the query as more of an agent-based flow. SQL is a bit odd in that it's executed bottom-up. For example, to execute the following query:
SELECT the_answer FROM final_table WHERE final_table.x IN
(SELECT some_field FROM initial_table)
...We first gather some_field from initial_table, and then go and fetch the_answer, despite the author (human or AI) having written the second step first. This is similar to the point made by Google in the pipe-syntax paper about how SQL syntactic clause order doesn't match semantic evaluation order.
At the end of the day, we have two agents performing the following tasks -
- Brainstorm some greek letters
- Using the output of the previous task, select only the first one
With BlendSQL, we can use common table expressions (CTEs) to more closely mimic this order of 'agents'.
smoothie = bsql.execute("""
WITH letter_agent_output AS (
SELECT * FROM (VALUES {{LLMQA('List some greek letters')}})
) SELECT {{
LLMQA(
'What is the first letter of the alphabet?',
options=(SELECT * FROM letter_agent_output)
)
}}
""")
print(smoothie.df)
┌──────────┐
│ _col_0   │
├──────────┤
│ α        │
└──────────┘
Using return_type to Influence Generation¶
BlendSQL does its best to infer datatypes from the surrounding syntax. Sometimes, though, the user may want to override those assumptions, or inject new ones that couldn't be inferred.
The return_type argument takes a Python-style type annotation like int, str, bool or float. Below we use it to guide the generation towards one or more integers.
smoothie = bsql.execute("""
SELECT * FROM ( VALUES {{LLMQA('Count up, starting from 1', return_type='int', quantifier='+')}} )
""")
print(smoothie.df)
┌────────┬────────┬────────┬────────┬────────┐
│ col0   │ col1   │ col2   │ col3   │ col4   │
├────────┼────────┼────────┼────────┼────────┤
│ 1      │ 2      │ 3      │ 4      │ 5      │
└────────┴────────┴────────┴────────┴────────┘
RAG for Unstructured Reasoning¶
In addition to using the LLMQA ingredient as a method for generating with tight syntax-aware constraints, we can also relax a bit and let the model give us an unstructured generation for things like summarization.
Also, we can use the context argument to provide relevant table context. This allows us to condition generation on a curated set of data (and do cool stuff with nested reasoning).
# Give a short summary of the person who had a musical by Lin-Manuel Miranda written about them
smoothie = bsql.execute("""
SELECT {{
LLMQA(
'Give me a very short summary of this person',
context=(
SELECT * FROM People
WHERE People.Name = {{
LLMQA('Who has a musical by Lin-Manuel Miranda written about them?')
}}
)
)
}} AS "Summary"
""", verbose=True)
print(smoothie.df)
Executing `{{LLMQA('Give me a very short summary of this person', context=(SELECT * FROM People WHERE People.Name = {{LLMQA('Who has a musical by Lin-Manuel Miranda written about them?')}}))}}`...
Executing `{{LLMQA('Who has a musical by Lin-Manuel Miranda written about them?')}}`...
Using options '{'Charli XCX', 'President Thomas Jefferson', 'John Adams', 'George Washington', 'Elon Musk', 'James Monroe', 'James Madison', 'Sabrina Carpenter', 'Elvis Presley', 'Michelle Obama', 'Alexander Hamilton'}...'
Final Query: SELECT * FROM People WHERE People.Name = 'Alexander Hamilton'
Using model cache...
Final Query: SELECT 'Founding father and financial leader' AS "Summary"
┌──────────────────────────────────────┐
│ Summary                              │
├──────────────────────────────────────┤
│ Founding father and financial leader │
└──────────────────────────────────────┘
# A two-step reasoning problem:
# 1) Identify who, out of the table, is a singer using `LLMMap`
# 2) Where the previous step yields `TRUE`, select the one that wrote the song Espresso.
smoothie = bsql.execute("""
WITH Musicians AS
(
SELECT Name FROM People
WHERE {{LLMMap('Is a singer?', Name)}} = TRUE
)
SELECT Name AS "working late cuz they're a singer" FROM Musicians M
WHERE M.Name = {{LLMQA('Who wrote the song "Espresso?"')}}
""", verbose=True)
print(smoothie.df)
Executing `{{LLMQA('Who wrote the song "Espresso?"')}}`...
Materializing CTE `Musicians`...
Executing `SELECT People.Name AS Name FROM People WHERE {{LLMMap('Is a singer?', People.Name)}} = TRUE` and setting to `Musicians`
Executing `{{LLMMap('Is a singer?', People.Name)}}`...
Extracted predicate literals `[True]`
Using regex '(t|f|true|false|True|False)'
Using model cache...
Using model cache...
Using model cache...
Using model cache...
Using model cache...
Using model cache...
Using model cache...
Using model cache...
Using model cache...
Using model cache...
Using model cache...
LLMMap with batch_size=1: 0it [00:00, ?it/s]
Finished LLMMap with values: { "Elon Musk": false, "George Washington": false, "Elvis Presley": true, "James Monroe": false, "Charli XCX": true, "John Adams": false, "James Madison": true, "Alexander Hamilton": true, "Sabrina Carpenter": true, "President Thomas Jefferson": false }
Combining 1 outputs for table `People`
CREATE OR REPLACE TEMP TABLE "7f1e_People" AS SELECT * FROM df
Created temp table 7f1e_People
Final Query: SELECT "7f1e_People".Name AS Name FROM "7f1e_People" WHERE "7f1e_People"."Is a singer?" = TRUE
CREATE OR REPLACE TEMP TABLE "Musicians" AS SELECT * FROM df
Created temp table Musicians
Using options '{'Charli XCX', 'James Madison', 'Sabrina Carpenter', 'Elvis Presley', 'Michelle Obama', 'Alexander Hamilton'}...'
Using model cache...
Final Query: SELECT M.Name AS "working late cuz they're a singer" FROM Musicians AS M WHERE M.Name = 'Charli XCX'
┌─────────────────────────────────────┐
│ working late cuz they're a singer   │
├─────────────────────────────────────┤
│ Charli XCX                          │
└─────────────────────────────────────┘
Internet-Connected RAG¶
So we know how to use a table subset as context by writing subqueries. But what if the knowledge we need to answer a question isn't present in the universe of our table?
For this, we can hook up any of our ingredients to a search function. We can modify the behavior of a default ingredient by initializing a new version via its from_args() method, then passing the new object either to the bsql.execute(ingredients={new_obj}) call, or to the original BlendSQL(ingredients={new_obj}) constructor.
Let's ask a question that requires a bit more world-knowledge to answer.
smoothie = bsql.execute("""
SELECT * FROM People WHERE Name = {{LLMQA("Who's birthday is June 28, 1971?")}}""")
print(smoothie.df)
┌───────────────┬─────────────────────────────────┐
│ Name          │ Known_For                       │
├───────────────┼─────────────────────────────────┤
│ Elvis Presley │ 14 Grammys, King of Rock n Roll │
└───────────────┴─────────────────────────────────┘
Not right - Elvis was born in 1935.
Now let's try again, using constrained decoding via options and using the RAGQA ingredient to fetch relevant context via a Tavily web search first.
from blendsql.ingredients import LLMQA
from blendsql.search import TavilySearch
from dotenv import load_dotenv
load_dotenv() # Assumes we have a .env file with a `TAVILY_API_KEY` variable set.
RAGQA = LLMQA.from_args(
searcher=TavilySearch(
k=2, # Retrieve 2 documents on each search
),
) # Whatever variable name we use here is the name we refer to the function by in our `execute` call
smoothie = bsql.execute("""
SELECT * FROM People WHERE Name = {{RAGQA("Who's birthday is June 28, 1971?")}}
""", ingredients={RAGQA}, verbose=True)
print(smoothie.df)
Executing `{{RAGQA("Who's birthday is June 28, 1971?")}}`...
Retrieved contexts '['Famous People Born on June 28, 1971 - BirthdayDBs....', 'June 28th, 1971 (Monday): Birthday, Zodiac & Weekd...']'
Using options '{'Charli XCX', 'James Madison', 'John Quincy Adams', 'Michelle Obama', 'Thomas Jefferson', 'George Washington', 'Sabrina Carpenter', 'Alexander Hamilton', 'Elon Musk', 'Elvis Presley', 'James Monroe'}...'
Using model cache...
Final Query: SELECT * FROM People WHERE People.Name = 'Elon Musk'
┌───────────┬──────────────────────────────────────┐
│ Name      │ Known_For                            │
├───────────┼──────────────────────────────────────┤
│ Elon Musk │ Tesla, SpaceX, Twitter/X acquisition │
└───────────┴──────────────────────────────────────┘
Nice! Elon Musk was indeed born on June 28th, 1971. You can check out the BlendSQL logs above to validate this given the web context.