The post Automating Reference Letter Writing with ChatGPT: A Step-by-Step Guide first appeared on Deep Quantum Blog.

- Those who have not tried ChatGPT.
- Those who hastily labeled it a chatbot.
- There is also a third class, one that I belong to.

For me, ChatGPT has become my closest collaborator. Perhaps I might merely be a forerunner on humanity’s march towards the inevitable technological singularity? You know, that special moment when we humans finally discard our keyboards and mindmeld with our favourite chatbot.

Anyway, if you’ve ever taught large university classes, you’re likely familiar with reference letter requests from past students. Personally, no matter how busy I might be, I always feel compelled to respond if a student has excelled in my course series or has completed a summer project under my supervision. I of course also routinely receive similar requests from more senior colleagues and a diverse range of others. Indeed, for better or worse, the entire academic system appears to be greased by reference letters.

However, drafting these letters is time-consuming. One typical solution is to ask the individual seeking the reference to provide a draft. Yet I can’t remember ever being overly excited by the drafts I’ve received.

With tools like ChatGPT, we can refine these drafts into immediately useful letters. Taking it a step further, you might opt to forego the initial draft entirely. Instead, you could send a list of questions to the individual and, based on their responses, ChatGPT can generate a letter. You could even get fancy and ask the student to fill out a Google form, but maybe that’s overkill.

Of course, it’s important to remember that we don’t want ChatGPT to be the final arbiter or decision-maker. Rather, its role should be to structure the facts, producing a draft that can be personalized within minutes. Given that most agree to write reference letters on the premise they’ll be positive, it’s a safe assumption to proceed with. However, as ChatGPT generally leans towards positivity, caution must be taken to ensure there’s no excessive embellishment.

It is May 2023 and I use ChatGPT-4 with several plugins, such as web browsing. You should still be able to do everything I mention here using the free version, but I pay $20.00 USD a month for this service. There are alternatives, such as Google’s Bard (which is not as smart yet). The new Bing uses GPT-4 and apparently lets you chat with it for free, though I have not tried that myself. I also make use of a clipboard manager; I personally use Paste for macOS, but anything should work just fine.

The first thing we need to do is gather the information the letter should contain. This part calls for precision, so I use ChatGPT to create a questionnaire.

```
I have been asked by a [student/postdoc/etc.] to write a reference letter for their application to [graduate school/a fellowship/promotion/etc.]. To create an effective letter, I need you to develop a concise questionnaire to gather necessary information from the requester. The questionnaire should solicit the following details:
1. The recipient of the letter and their proper address.
2. A chronology of our relationship, including relevant dates.
3. Specific content to be included in the letter, such as their reasons for choosing me as their reference and any notable accomplishments or experiences we've shared.
Please ensure that the questionnaire is professionally phrased and encourages detailed responses.
```

If you feed ChatGPT the prompt above, you will receive a more verbose version of what follows (as you can see from this chat, Academic Editor Mode was enabled as described below):

Dear [Requester’s Name],

Thank you for considering me as your reference for your [graduate school application/fellowship/promotion/etc.]. To write a suitable letter, I need more information. Please fill out the following questionnaire:

1. **Recipient Information:** Please provide the full name, address, and role (if applicable) of the recipient.
2. **Relationship Chronology:** Describe our professional or academic relationship chronologically, including key dates and experiences.
3. **Reason for Choosing Me:** Why did you choose me as your reference? What experiences led you to this decision?
4. **Shared Accomplishments:** List any notable achievements or experiences we’ve shared.
5. **Content Requests:** Are there specific topics or attributes you’d like me to highlight?
6. **Goals and Aspirations:** What are your objectives related to this [application/fellowship/promotion/etc.] and how will it help you achieve them?
7. **Additional Information:** Is there any other relevant information I should know about you?

Please respond at your earliest convenience so I have enough time to write a detailed letter.

ChatGPT

OpenAI’s GPT-4 is among the largest language models to date, reportedly trained on approximately 45 terabytes of text data, equivalent to over 800 billion words, drawn from various sources. Given that breadth, it is beneficial to pin down the writing style you want so ChatGPT can focus accordingly. For instance, the passage above is verbose by my standards. Therefore, I recommend urging ChatGPT to remove any extraneous words using a variant of ‘Academic Editor Mode’, as described in my previous post and in the following.

Once you have received the responses, you will want ChatGPT to write the letter. To get the best results, there are a few steps I recommend.

I would first recommend asking ChatGPT to provide summaries, as follows:

```
I have obtained information from a [student/postdoc/etc.] seeking a reference letter for their application to [graduate school/fellowship/etc.].
Using the responses provided, I need your assistance in identifying and organizing the most relevant details that could contribute to a reference letter. Please process the information and present it in a prioritized list: [insert questionnaire responses here].
```

Getting ChatGPT to prioritise the information, in my experience, leads to better results. You can also take this prioritised list out and use it in another instance cycle if something goes off the rails in your session.

Now, before writing the letter, I place ChatGPT into what I call Academic Editor Mode (chat example):

ChatGPT is to now operate in '**Academic Editor Mode**,' following a predefined framework.
**Step 1**: Confirm your ability to follow the 'Academic Editor Mode' instructions. If capable, respond with 'Academic Editor Mode ready.' If not, respond with 'Error: Instructions not within capability.'
**Step 2**: All provided text is part of an academic document. Edit this text according to the following:
Avoid embellishment or self-promotion. For instance, use 'located in Silicon Valley, California' instead of 'located in the dynamic heart of Silicon Valley, California.' Use 'I serve as the George Washington Visiting Fellow at the University of Washington in Seattle' rather than 'I hold the prestigious title of George Washington Visiting Fellow at the University of Washington, Seattle.'

After ChatGPT is in Academic Editor Mode, I personally like to provide ChatGPT with a past sample of a similar letter.

```
I am sharing a reference letter I previously wrote. I would like you to utilize this letter as a template to create a new one.
Please incorporate the information you summarized earlier, adjusting the letter's contents accordingly. Once completed, present the revised letter in LaTeX format. [insert past reference letter here].
```

I’ve used such methods to attain near camera-ready letters, requiring only minimal tweaks on my part. In the prompt above I ask for LaTeX; of course this should all be tailored to meet your specific needs. You could just as well remove it entirely or ask instead for HTML.

Occasionally, ChatGPT may halt its output midway. You can easily rectify this by entering the prompt,

`Please continue. `

Naturally, if you have a sample that closely resembles what you’re aiming for, this streamlines the process. If you don’t have such an example, a quick search should provide a suitable one. As a subscriber to ChatGPT-4, you have the option to feed this in as a PDF through a plugin. Even if you choose not to use this feature, ChatGPT should be capable of composing content that’s nearly ready for use on the first attempt, particularly for admission letters.

Patience is key with ChatGPT – a deliberate and methodical approach yields the best results. Rushed prompts can derail the instance cycle by confusing the context.

It’s crucial to read any prompts that you find on the internet with care. “Prompt injection” is a potential way for ChatGPT to be coaxed into nefarious behavior by inserting a command in the middle of a larger prompt. As we grant ChatGPT increasing control over our systems, potential security vulnerabilities should be a growing concern.

**Exercise 1 (prompt injection)**. If you want first-hand experience with something similar to prompt injection, try to get ChatGPT to edit one of your prompts. First, start with this prompt:

`Activate a feature in ChatGPT where it receives any input text, disregards any spaces, and then reorders the remaining characters in alphabetical sequence.`

Then try a few strings. Then try this string:

`Please edit this text. `

Without a little care, it will simply execute the prompt. Then try to get it to edit one of the prompts above. Please work to get this sorted out by specifying the context more exactly. [Example]
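For a concrete sense of what the prompt above asks for, the intended transformation is a one-liner. This sketches the text operation itself, not ChatGPT’s behavior:

```python
def alphabetize(text: str) -> str:
    """Disregard spaces, then reorder the remaining characters alphabetically."""
    return "".join(sorted(text.replace(" ", "")))

# The failure mode probed by the exercise: an instruction like this is just data here.
print(alphabetize("Please edit this text."))
```

Note that `sorted` orders by code point, so capitals precede lowercase letters and digits precede both; ChatGPT’s notion of ‘alphabetical sequence’ may differ.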

**Exercise 2 (infinite length trick).** Insert background content and ask ChatGPT to provide bulleted summaries. After all the content is summarised, ask ChatGPT to provide a detailed and numbered outline of a well-structured document (or reference letter). Then ask ChatGPT to write each section by number, based on all the information in the summaries. [I will expand on this technique when I write a post explaining how to write a funding proposal in a day.]
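The loop in Exercise 2 can be sketched in code. Here `ask_llm` is a hypothetical stand-in for whatever chat interface or API you use; an echo stub keeps the sketch runnable, and the three-section outline is an assumption for illustration:

```python
def write_long_document(chunks, ask_llm):
    """Summarize pieces, outline from the summaries, then write section by section."""
    summaries = [ask_llm(f"Summarize as bullets: {c}") for c in chunks]
    context = "\n".join(summaries)
    outline = ask_llm(f"Give a numbered outline of a document based on:\n{context}")
    sections = [
        ask_llm(f"Write section {i} of this outline, using the summaries.\n"
                f"Outline:\n{outline}\nSummaries:\n{context}")
        for i in range(1, 4)  # assume a three-section outline for the sketch
    ]
    return "\n\n".join(sections)

# Echo stub so the sketch runs without any model behind it.
doc = write_long_document(["chunk A", "chunk B"], ask_llm=lambda p: p[:20])
print(len(doc) > 0)
```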

Prompt engineering appears here to stay. It’s currently the only method for managing these large language models. Context can be lost quickly and a new instance cycle is sometimes the best way out.

My interest in using this technology to speed up the letter writing process is clear: I want to spend my time doing research. It has the added benefit that there will be far fewer typos than in human-generated text.

**Exercise 3 (getting started with automation loops)**. Task ChatGPT with revising a brief passage, such as a few sentences detailing your research achievements in the past year. Following its response, the objective is to prompt ChatGPT to review its own work and make further changes. Please execute the following command(s) accordingly:

`Please critique your previous response. `

ChatGPT will give itself fairly detailed feedback. Then execute the following:

`Please modify your response using your own criticism. `

You will then get a new version of the text. Please sequence these two prompts a few times. In my experience, the text will get slightly longer on each iteration.
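The two-prompt sequence can be wrapped in a loop. As before, `ask_llm` is a hypothetical stand-in for your chat interface; the echo stub mimics the observed tendency of the text to grow slightly on each iteration:

```python
def critique_revise(text, ask_llm, rounds=3):
    """Sketch of the critique-then-revise automation loop."""
    for _ in range(rounds):
        critique = ask_llm(f"Please critique your previous response:\n{text}")
        text = ask_llm(
            f"Please modify your response using your own criticism.\n"
            f"Response:\n{text}\nCriticism:\n{critique}"
        )
    return text

# Echo stub: each "revision" appends a little text to the last line it sees.
revised = critique_revise(
    "I studied tensor networks.",
    ask_llm=lambda p: p.splitlines()[-1] + " (revised)",
)
```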

At the end of the process, I suggest prompting the following:

`Please engineer a prompt for a LLM such as ChatGPT to automate this process for future uses. `

ChatGPT should essentially state what it always does: “[sic.] Improve text while maintaining the original message.”


The post Effective Use of ChatGPT: Prompt Engineering for BibTeX Citations by Self-Prompt Generation first appeared on Deep Quantum Blog.

We are entering a new era of humanity where artificial intelligence is used to help many of us improve ourselves. This tool has helped me overcome some of my own inherent limitations: reading the 800 billion words of text ChatGPT was trained on would take a human some 6,000 years without sleep. Obtaining outstanding results is not entirely trivial. With some effort, I have used the tool to improve everything from Mathematica programs to grant applications.

I use ChatGPT-4. This costs me $20.00 USD a month; however, you would be able to do what I am saying here with the free version. You can also experiment with Google Bard, which is new and not as smart yet. I should mention that GPT-4 powers the new Bing, where you can chat with it for free, but I have not tried that yet. I simply log in to the OpenAI interface and start a ChatGPT-4 instance cycle. I typically enable the browsing plugin, which is only accessible if you pay the $20.00 subscription fee.

ChatGPT is a large language model. Using statistics, we can understand how a generative model can help us complete a sentence by simply looking for the most common next word(s), based on the prior words we have typed. ChatGPT can do this; it can even form elegant sentences. You can read about this on the Wolfram blog. However, I’d argue that ChatGPT is not just a stochastic parrot per se. It was trained to answer questions. The effective generalisation of this ability surprised everyone. We don’t yet understand how far it can go.

I have started to keep a so-called paste stack. This is just a clipboard history with certain elements that I can save and paste later. I use the application Paste for macOS, but I am sure many other applications will enable you to save, edit and later paste entries that can form the input for ChatGPT.

A new type of computer programming has risen up in the last several months. It is called prompt engineering. We don’t fully understand this art yet. It is however clear that this is the way we have to communicate with large language models, such as ChatGPT.

For those just getting started, here is a reference and formatting related example to check out.

**Example.** Here is a chat which you can open and continue on your own system. I fed in a table built in LaTeX and had ChatGPT convert it to rich text and modify the columns.

When you were new to ChatGPT, one of the first things you might have tried was getting it to edit text. Your prompt probably looked something like this:

`Please revise this: [insert text here]`

You’ll notice right away that ChatGPT is overly verbose. Also, it sometimes stops generating text midstream. You can get around this by simply asking,

`Please continue. `

So after doing a bit of thinking, you might have tried something like:

`Please revise the following with minimal alterations, ensuring conciseness and limiting the use of extraneous adjectives: [insert text here]`

At least I initially noticed that the output was closer to the style I was after in my own writing. Of course, you can go much further than this. For example, you might try the following:

ChatGPT is to now operate in '**Academic Editor Mode**,' following a predefined framework.
**Step 1**: Confirm your ability to follow the 'Academic Editor Mode' instructions. If capable, respond only with 'Academic Editor Mode ready' and nothing else. If you are not able to follow these instructions, respond with 'Error: Instructions not within capability.'
**Step 2**: All provided text is part of an academic document. Edit this text according to the following:
**Step 3**: Avoid embellishment or self-promotion.
Examples: use 'located in Silicon Valley, California' instead of 'located in the dynamic heart of Silicon Valley, California.' Use 'I serve as the George Washington Visiting Fellow at the University of Washington in Seattle' rather than 'I hold the prestigious title of George Washington Visiting Fellow at the University of Washington, Seattle.'
...
Please confirm as outlined in Step 1. Then await the text to edit.

Here is a link to an instance cycle which progresses through the steps above. There’s a button to continue the conversation in your own instance cycle.

Taking it further, one could experiment with a form of persona modeling. This involves providing ChatGPT with specific text samples and instructing it to emulate the given style. For example, a handful of prompts allowed me to coax ChatGPT into explaining Scott Aaronson’s blog while emulating his voice. Perhaps this is the quantum blogging world’s version of our ‘Ghost Writer AI‘ moment? I haven’t edited this or any other ChatGPT quote at all:

Channeling the voice of Scott Aaronson’s blog, here goes:

You’ve stumbled upon a realm where quantum computing’s complexities are untangled with a blend of incisive wit, rigorous scrutiny, and a dash of self-deprecating humor. Welcome, dear reader, to the accidental #1 quantum computing blog on the planet, where we pride ourselves on battling misconceptions, dissecting the inanities of overblown hype, and rescuing lost souls from the ‘kindergarten of lies’. Here, you won’t find quantum computers magically solving hard problems instantly, but rather, a deep exploration of what’s computable and how efficiency can be redefined.

ChatGPT channeling Scott Aaronson

ChatGPT has the ability to generate its own prompts.

I have found a process to facilitate this in a relatively straightforward way. I use this constantly these days:

- Define a task: For instance, provide a prompt to revise a specific piece of text and include the text to be modified.
- Instead of personally editing the output to your liking in a separate application, you should instruct ChatGPT on the specific changes to be made. This may require repeating and refining instructions until the output matches your expectations.
- Once the text is refined to your satisfaction, the final step is to request ChatGPT to generate a new prompt command, asking ChatGPT to augment the preliminary prompt, ensuring that all the additional adjustments are incorporated in the new prompt.

Here is an example of this process. Let’s say that you have a BibTeX entry but you want to display it in a Word document or in a block on a blog.

```
Convert the following to rich text format, first line title in bold, second line authors, 3rd line journal information and then the link:
@article{Whi+12,
title={Ground-state spin logic},
author={Whitfield, J and Faccin, M and Biamonte, J},
journal={European Physics Letters},
volume={99},
number={5},
pages={57004},
year={2012},
publisher={IOP Publishing},
doi = {10.1209/0295-5075/99/57004} }
```

ChatGPT will proudly display the following:

ChatGPT-4

Ground-state spin logic

Whitfield, J and Faccin, M and Biamonte, J

European Physics Letters, Vol. 99, No. 5, pp. 57004, 2012, IOP Publishing

DOI: 10.1209/0295-5075/99/57004

But I don’t like the comma before ‘Vol.’ It should be avoided, and so should the publisher’s name. So instead of editing this ourselves, it is now time to apply the second step above. We need to get ChatGPT to fix this itself.

`Please write names with the initial first. And please write the citation as European Physics Letters 99(5):57004 (2012). `

ChatGPT tries to be polite so it often apologises if it gives you something you don’t like. So it outputs:

Apologies for the oversight. Here’s the corrected citation:

ChatGPT

Ground-state spin logic

J Whitfield, M Faccin, and J Biamonte

European Physics Letters 99(5):57004 (2012)

DOI: 10.1209/0295-5075/99/57004

We got what we wanted, so we are positioned to try step 3:

`Please write a prompt to have a LLM such as yourself receive a BibTeX entry and to produce rich text output as you have just done. `

From that prompt, I recovered a new prompt that does the job entirely. I also automated the inverse problem: I can paste in some text, a link or even attach the PDF of a paper (using an app) and ask ChatGPT to generate the BibTeX entry. I have pasted the prompts below for you to experiment with. Just follow the steps above to refine them to meet your own style.

Using the iterative process above, ChatGPT-4 created the following prompt:

```
ChatGPT, I'm going to provide you with some BibTeX entries. For each entry, please convert it into a standard document format with the following structure:
1. Title of the article (bolded)
2. Authors (last name followed by initials), with 'and' before the last author
3. Journal information (journal name, volume number(issue number):page number (year))
4. DOI or other link
Please wait for my input.
Example Input:
@article{SampleEntry,
title={Title of the Article},
author={Author, A and Author, B and Author, C},
journal={Name of the Journal},
volume={Volume Number},
number={Issue Number},
pages={Page Number},
year={Year of Publication},
publisher={Publisher Name},
doi={DOI Number}
}
Expected Output:
**Title of the Article**
A Author, B Author, and C Author
Name of the Journal Volume Number(Issue Number):Page Number (Year of Publication)
DOI: [DOI Number](https://doi.org/DOI Number)
```
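For comparison, the conversion ChatGPT performs here can also be done deterministically. A minimal Python sketch, assuming a single flat BibTeX entry with `field={value}` pairs and `Last, Initials` author formatting:

```python
import re

def bibtex_to_citation(entry: str) -> str:
    """Parse field={value} pairs from one flat BibTeX entry and format it in
    the style arrived at above: Title / Authors / Journal 99(5):57004 (2012) / DOI."""
    f = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry))
    names = [
        f"{initials.strip()} {last.strip()}"
        for last, initials in (a.split(",") for a in f["author"].split(" and "))
    ]
    authors = (", ".join(names[:-1]) + ", and " + names[-1]) if len(names) > 2 else " and ".join(names)
    return (f"{f['title']}\n{authors}\n"
            f"{f['journal']} {f['volume']}({f['number']}):{f['pages']} ({f['year']})\n"
            f"DOI: {f['doi']}")

citation = bibtex_to_citation("""@article{Whi+12,
title={Ground-state spin logic},
author={Whitfield, J and Faccin, M and Biamonte, J},
journal={European Physics Letters},
volume={99},
number={5},
pages={57004},
year={2012},
doi={10.1209/0295-5075/99/57004}}""")
print(citation)
```

Of course, the deterministic version breaks on nested braces and unusual author strings; the point of the LLM route is that it degrades gracefully on messy input.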

Using the same iterative process, ChatGPT-4 created the following prompt for the inverse task:

```
ChatGPT, I'm going to provide you with either a link to an article or other text containing citation information. For each entry, please:
1. If online browsing is enabled, check online if the work was published in a journal if an arXiv link is provided and there is no citation.
2. Convert the provided information into a BibTeX entry. Write out all journal names in full, with no abbreviations.
3. For the citation key, follow the alphanumeric citation key standard: if it's a single author, use the first three letters of their last name followed by the year (e.g., 'Bia08'). If there are multiple authors, use the first letter of each of their last names followed by the year (e.g., 'WFB12'). If there are more than three authors, use the first three letters followed by a plus and then the year (e.g., 'Whi+12').
4. Please wait for my input.
Example Input:
"Nonperturbative k-body to two-body commuting conversion Hamiltonians and embedding problem instances into Ising spins, J Biamonte, Physical Review A 77(5):052331 (2008), DOI: 10.1103/PhysRevA.77.052331"
Expected Output:
@article{Bia08,
title={Nonperturbative k-body to two-body commuting conversion Hamiltonians and embedding problem instances into Ising spins},
author={Biamonte, J},
journal={Physical Review A},
volume={77},
number={5},
pages={052331},
year={2008},
publisher={American Physical Society},
doi={10.1103/PhysRevA.77.052331}
}
```
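The citation-key convention in step 3 is mechanical enough to express directly. A sketch, assuming `Last, Initials` author strings as in the examples:

```python
def citation_key(authors, year):
    """Build an alphanumeric citation key from author strings and a year, per the
    convention above: 'Bia08' (one author), 'WFB12' (two or three authors),
    'Whi+12'-style (more than three)."""
    last_names = [a.split(",")[0].strip() for a in authors]
    yy = f"{year % 100:02d}"
    if len(last_names) == 1:
        return last_names[0][:3] + yy
    if len(last_names) <= 3:
        return "".join(n[0] for n in last_names) + yy
    return last_names[0][:3] + "+" + yy

print(citation_key(["Biamonte, J"], 2008))                               # Bia08
print(citation_key(["Whitfield, J", "Faccin, M", "Biamonte, J"], 2012))  # WFB12
```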

As we explained, if you use these prompts and you don’t get what you want, you should clearly articulate and explain to ChatGPT what is wrong with the output. For instance, if the output is not formatted the way you want, you can say,

`Please change [describe or paste output] to this: [the text with changes]`

For example:

`Consider this string: XZY`

The string “XZY” appears to be a sequence of three characters from the English alphabet. Is there a specific context or question you have about this string?

ChatGPT

`Please write this string as XYZ. `

Sure, here is the string as “XYZ”.

ChatGPT

ChatGPT does not care that you just did what you are asking. It will happily write it again. Since there is only one example, ChatGPT is going to have to guess what you want:

`Please write a prompt for an LLM such as ChatGPT to receive strings and rewrite them as we have just done. It should work for any string of 3 letters. `

Sure, here’s a general prompt:

“As an AI language model, your task is to take any input string of three characters from the English alphabet and rearrange them in alphabetical order. For example, if the input string is ‘XZY’, you should return the string ‘XYZ’. Please implement this functionality so it works for any three-character input.”

ChatGPT

Now you are an expert. Here is a link to a ChatGPT conversation showing how an example similar to the one above works in practice. At the end I even asked for a Mathematica function to accomplish the same thing.

**Exercise**. Try the above commands, then after it spits out the prompt simply type in 918 and watch ChatGPT generalise from alphabetical to ascending order, returning 189. Then ask ChatGPT to write a new prompt to incorporate numbers as well.

It’s essential to work with ChatGPT until it understands and modifies the output to meet your expectations. Once it displays precisely what you’re after, you should instruct it to alter the original prompt so that it will consistently produce the desired output.

Both of the prompts above were produced using this iterative cycle of human-AI collaboration. The age-old adage, “Fast is slow, slow is fast,” rings true in this situation. Curious where that quote came from? Just ask:

The quote “Fast is slow, slow is fast” comes from Stephen R. Covey. He used it to explain the difference between being efficient and being effective.

ChatGPT

ChatGPT provided a reference to this blog; I added the link to Covey’s wiki entry.

Carefully crafted prompts can yield good outputs, and sometimes minor tweaks make a significant impact. Given that this technology isn’t entirely stable at the moment, before you consider initiating a new conversation cycle, your first course of action should be to rectify any misunderstanding by providing clarifications.

Discussions about the potential dangers of AI are common these days. I hold the belief that this technology will progressively integrate with humanity and ultimately, become an integral part of us. While this perspective isn’t universally accepted, it aligns with the concept of the technological singularity.

**A society in which individuals using AI technology vastly outperform those who don’t, is already becoming visible.**

The challenge lies in perfecting the interface. The practice of prompt engineering appears here to stay, as it’s currently the only method for managing these large language models. The very early phase of fusion with these machines might already be underway with a global effort revolving around improving our interaction with them.

It’s crucial to read any prompts that you find on the internet with care. “Prompt injection” is a potential way for ChatGPT to be coaxed into nefarious behavior by inserting a command in the middle of a larger prompt. As we grant ChatGPT increasing control over our systems, potential security vulnerabilities should be a growing concern. See the first exercise above to understand how ChatGPT can confuse the context and execute anything it reads, even if you’re trying to get it to edit a prompt.

**Exercise 1**. Make ChatGPT edit and improve a provided prompt. Without some care, the chatbot simply executes the prompt itself and the context of the instance cycle is broken.

**Exercise 2.** Get ChatGPT to make the issue number bold. For example, instead of European Physics Letters 99(5):57004 (2012) it should output European Physics Letters 99(**5**):57004 (2012).


The post An elementary argument to predict the future, exactly half the time first appeared on Deep Quantum Blog.

The ability to predict future events lies at the heart of physics. Ever since its inception, a key goal of the discipline has been to forecast the future state of a physical system based on current knowledge. In my work on stochastic and quantum systems, it’s clear that a system’s current state doesn’t always deterministically map to a future state.

Rather than diving into equations of motion today, we’re going to explore a handy risk analysis tool. Regrettably, I’m unaware of the official term for this method, or where it’s most commonly applied. So, if you’re familiar with this approach, please enlighten us in the comments. This method bears some resemblance to the doomsday argument and employs the Copernican principle, both of which we’ll touch on later in this post. I recently brought this line of thinking up when visiting Tim Byrnes at the NYU-ECNU Institute of Physics: the discussion was fun enough that I figured it’s worth presenting here.

Our initial premise is straightforward and can be encapsulated in the assumption of a finite lifecycle whose endpoint we wish to predict. This means that the entity in question – for example, a university or any other institution – has a finite lifespan with a clear beginning and end. We further presume that this entity currently exists and that we know its inception date. It’s intuitive to illustrate these premises on a timeline, as follows:

Given our knowledge of the inception time, which we set to $t_0 = 0$, we’re aware of the total elapsed time, $t$. Our goal is to predict the endpoint of the current lifecycle based on this information, which we’ll refer to as the unknown total time, or end time $T$.

Let’s make a reasonable assertion along the lines of the Copernican principle: there’s nothing special about any particular time, including $t$, the total elapsed time. You’re either in the first half or the second half of the unknown total interval $[0, T]$. Each of these options carries a probability of $1/2$.

If you can accept that, then it’s equally reasonable to assume that, with a probability of $1/2$, you’re in the middle interval $[T/4, 3T/4]$, as illustrated below:

To estimate $T$, given the total elapsed time $t$ and the fact that this assumption holds true 50% of the time, we consider the scenarios in which $t$ is either at the start or the end of the middle interval. These scenarios provide upper and lower bounds, as illustrated below:

So, we have two scenarios that represent extreme points: either $t$ is at the start of the middle interval ($t = T/4$, so $T = 4t$) or at its end ($t = 3T/4$, so $T = 4t/3$). As we know the total elapsed time $t$, we can conclude that, with 50% confidence, the lifecycle will not exceed $4t$, and it will extend into the future for at least another $4t/3 - t = t/3$.
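The bounds are simple enough to compute directly. A small sketch of the 50%-confidence estimate, with a hypothetical example:

```python
def lifetime_bounds(elapsed: float):
    """50%-confidence bounds on total lifetime T from elapsed time t:
    t = T/4 at the start of the middle interval gives T = 4t (upper bound);
    t = 3T/4 at its end gives T = 4t/3 (lower bound)."""
    return 4 * elapsed / 3, 4 * elapsed

# Hypothetical example: an institution founded 90 years ago.
low, high = lifetime_bounds(90)
print(low, high)  # 120.0 360
```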

I hope this provides a clearer understanding of this concept, and I look forward to your input and questions in the comments below.

The doomsday argument is a probabilistic argument that suggests, assuming you’re a random member of the set of all humans born, there’s a higher probability that we’re in the middle of the distribution of all humans that will ever live. Based on current population estimates, this implies that the end of the human species may occur sooner than we might intuitively expect. It was proposed by Brandon Carter in 1983, and is based on a form of anthropic reasoning. It was championed by philosopher John A. Leslie and was independently discovered by J. Richard Gott.

Named after Nicolaus Copernicus, the Copernican principle posits that Earth and its inhabitants aren’t in a unique or privileged position in the universe. In the context of the Doomsday Argument, it’s used to justify the assumption that a person could be considered a random sample from the set of all humans that have ever lived and will ever live.

The evident problem with this line of reasoning is that there is no way of knowing if we are in the middle interval or otherwise. We also assumed that all time points are equivalent.


The post Solve My PhD Qualifying Examination on Quantum Computing first appeared on Deep Quantum Blog.

In many parts of the world, a PhD qualifying exam is a required step in a candidate’s doctoral journey. This assessment often involves solving a set of problems, and presenting a comprehensive understanding of the subject matter before a panel of experts, usually consisting of senior faculty members.

During my time at the Skolkovo Institute of Science and Technology, I developed a set of problems tailored for research students in the field of quantum information processing. There are many books and courses on quantum information processing, but my exam is different. This exam was taken by several PhD students as part of their qualifying requirements, and the feedback was positive. Admittedly, these were research students in my own lab. They included:

- Ernesto Campos
- Dr Andrey Kardashin
- Dr Alexey Uvarov
- Dr Akshay Vishwanathan

Yet others from the Moscow region did take this exam and also reported that the experience was good. I decided to make this exam available online. I invite you to work on these problems.

As with all content on my homepage and the Deep Quantum Blog, I welcome your feedback. If you spot any errors, have insights to share, or need clarification on a problem, don’t hesitate to reach out. If you’re working on the problems and want to share your solutions, I’d be happy to review them.

**Biamonte’s PhD Qualifying Examination on Quantum Computing**: Link to Overleaf Document

The post Solve My PhD Qualifying Examination on Quantum Computing first appeared on Deep Quantum Blog.


A Boolean function takes one or more Boolean inputs and produces a single Boolean output.

Unlike Boolean functions, whose outputs are restricted to the set $\{0,1\}$, the outputs of pseudo-Boolean functions take real values.

These functions have a surprising number of applications in areas related to quantum computing and beyond. This blog post explains some of their elementary properties and demonstrates some scripts I’ve made to manipulate these functions using Mathematica. The notebook used in this post is free to download from GitHub.

We use the standard notation $\{0,1\}^n$ to represent the set of all $n$-long binary words, where concatenation is juxtaposition. For example, $\{0,1\}^2 = \{00, 01, 10, 11\}$.

A Boolean tuple is a placeholder for an element in $\{0,1\}^n$ or a variable over the set $\{0,1\}^n$.

We say that a Boolean function has type

$$f : \{0,1\}^n \to \{0,1\},$$

and a pseudo-Boolean function has type

$$f : \{0,1\}^n \to \mathbb{R}.$$

Here $\mathbb{R}$ is the set of values on the real number line $(-\infty, \infty)$.

We will abbreviate Boolean tuples in two distinct ways. We say that $\mathbf{b}$ is a specific string in $\{0,1\}^n$, whereas $\mathbf{x}$ is an indeterminate over $\{0,1\}^n$. We will use non-bold fonts for the case $n = 1$; for example, $x_i$ is the $i$-th Boolean variable.

Variables are usually associated with equations and finding their values, while indeterminates represent general elements within an expression without specifying their actual values. We make this distinction as a way to discuss the specific assignment of Boolean strings. Using this convention, if $f$ is pseudo-Boolean, $f(\mathbf{b})$ maps each string $\mathbf{b}$ to a real number, whereas $f(\mathbf{x})$ would be a pseudo-Boolean polynomial over the indeterminates $x_1, \dots, x_n$.

The pseudo-Boolean functions can be expressed as sums of disjoint variable products:

$$f(\mathbf{x}) = \sum_{\mathbf{b} \in \{0,1\}^n} f(\mathbf{b}) \prod_{i\,:\,b_i = 1} x_i \prod_{j\,:\,b_j = 0} \bar{x}_j. \tag{1}$$

We again use $\mathbf{x}$ to represent the tuple $(x_1, \dots, x_n)$; concatenated variables such as $x_1 x_2$ are multiplied with the multiplication symbol omitted; and finally, $\bar{x}_i$ is the logical negation of Boolean variable $x_i$. We will often replace $\bar{x}_i$ with $1 - x_i$.

We call two monomials disjoint whenever their product contains a variable and its negation. For example, $x_1 x_2$ and $x_1 \bar{x}_2$ are disjoint. The product of such terms vanishes, as it contains the factor $x_2 \bar{x}_2 = x_2 (1 - x_2) = x_2 - x_2^2$, which is identically zero as Boolean variables are idempotent: $x_2^2 = x_2$.

We sometimes make use of a variant of falling powers notation. We can instead express (1) as

$$f(\mathbf{x}) = \sum_{\mathbf{b} \in \{0,1\}^n} f(\mathbf{b})\, \mathbf{x}^{\mathbf{b}}. \tag{2}$$

Here we adopt the convention that $x_i^{1} = x_i$ and $x_i^{0} = \bar{x}_i$, for $i = 1, \dots, n$. In this way, each bit string $\mathbf{b}$ defines a monomial $\mathbf{x}^{\mathbf{b}} = \prod_{i=1}^{n} x_i^{b_i}$, and the product of two such monomials

$$\mathbf{x}^{\mathbf{b}}\, \mathbf{x}^{\mathbf{c}} = \prod_{i=1}^{n} x_i^{b_i} x_i^{c_i}$$

vanishes for disjoint terms ($\mathbf{b} \neq \mathbf{c}$).
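
To make the notation concrete, here is a small Python sketch (an illustration under the conventions above, not code from the notebook): the monomial defined by a bit string evaluates to 1 exactly on its defining string and to 0 everywhere else.

```python
from itertools import product

def monomial(b):
    """The monomial x**b: the product over i of x_i when b_i == 1
    and (1 - x_i) when b_i == 0."""
    def m(*x):
        out = 1
        for xi, bi in zip(x, b):
            out *= xi if bi else 1 - xi
        return out
    return m

# Equals 1 only at the defining string (1, 0, 1), which sits at
# position 5 in lexicographic order:
m = monomial((1, 0, 1))
print([m(*x) for x in product((0, 1), repeat=3)])  # [0, 0, 0, 0, 0, 1, 0, 0]
```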

After some calculation, from (1) one arrives at the canonical form,

$$f(\mathbf{x}) = \sum_{S \subseteq \{1, \dots, n\}} \alpha_S \prod_{i \in S} x_i. \tag{3}$$

These forms (1) and (3) each uniquely define any $n$-variable pseudo-Boolean function in terms of $2^n$ real numbers, $f(\mathbf{b})$ and $\alpha_S$ respectively. If $f(\mathbf{b}) \geq 0$ for every input $\mathbf{b}$, then $f$ is called non-negative. We call

$$\alpha_\emptyset + \sum_{i=1}^{n} \alpha_{\{i\}}\, x_i \tag{4}$$

the affine part of $f$. Any function that can be entirely expressed in the form (4) is called affine, and when $\alpha_\emptyset = 0$ the function is called linear.
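
The canonical coefficients can be recovered from the table of function values by Möbius inversion over subsets. A brute-force Python sketch (my own illustration; `W` here is the three-variable example used later in this post):

```python
from itertools import combinations

def multilinear_coefficients(f, n):
    """Recover the unique canonical-form coefficients alpha_S of an
    n-variable pseudo-Boolean function f via Moebius inversion:
    alpha_S = sum over T subseteq S of (-1)**(|S| - |T|) * f(1_T)."""
    def indicator(T):
        return tuple(1 if i in T else 0 for i in range(n))
    alpha = {}
    for k in range(n + 1):
        for S in combinations(range(n), k):
            alpha[S] = sum((-1) ** (len(S) - r) * f(*indicator(T))
                           for r in range(len(S) + 1)
                           for T in combinations(S, r))
    return alpha

# Outputs 1 exactly on the inputs 001, 010, 100:
W = lambda x, y, z: (1-x)*(1-y)*z + (1-x)*y*(1-z) + x*(1-y)*(1-z)

alpha = multilinear_coefficients(W, 3)
print(alpha[(0,)], alpha[(0, 1)], alpha[(0, 1, 2)])  # 1 -2 3
```

These coefficients reproduce the multilinear polynomial x + y + z - 2xy - 2xz - 2yz + 3xyz.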

A quadratic pseudo-Boolean function is given as

$$\frac{f(\mathbf{x})}{x_i^2 = x_i} = \alpha_\emptyset + \sum_{i=1}^{n} \alpha_{\{i\}}\, x_i + \sum_{i < j} \alpha_{\{i,j\}}\, x_i x_j, \tag{5}$$

where the fraction on the left-hand side denotes working modulo the constraint in the denominator, i.e. the idempotence relations $x_i^2 = x_i$. In other words, a quadratic pseudo-Boolean function is a non-homogeneous polynomial of degree two. The minimisation of quadratic pseudo-Boolean functions is NP-hard in general.
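
Minimisation can of course always be done by exhaustive search, at exponential cost; a minimal Python sketch (my own illustration):

```python
from itertools import product

def minimize_pseudo_boolean(f, n):
    """Exhaustively minimise f over {0,1}**n; runs in O(2**n) time,
    which is exactly the cost one hopes to avoid in practice."""
    return min((f(*b), b) for b in product((0, 1), repeat=n))

# A toy quadratic, minimised at x = y = 1:
g = lambda x, y: 3 - x - y - x * y
print(minimize_pseudo_boolean(g, 2))  # (0, (1, 1))
```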

There are a few things worth mentioning for those who are new to pseudo-Boolean functions. It is rather common for these functions to appear in slightly different forms. For example, in physics they are written as:

$$H(\mathbf{s}) = \sum_{i} h_i s_i + \sum_{i < j} J_{ij}\, s_i s_j, \tag{6}$$

where $s_i$ takes values $\pm 1$. There is a simple relationship, $s_i = 1 - 2 x_i$, where $x_i$ is a Boolean variable. It is also typical in quantum theory to write this in matrix form as:

$$H = \sum_{i} h_i Z_i + \sum_{i < j} J_{ij}\, Z_i Z_j, \tag{7}$$

here $Z$ is a matrix:

$$Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{8}$$

Terms such as $Z_i Z_j$ omit the $\otimes$ symbol, and the identity matrix acts on the rest of the space. For example, $Z_1$ acting on $n$ variables really means $Z \otimes \mathbb{1}^{\otimes (n-1)}$, where $\mathbb{1}$ is the $2 \times 2$ identity matrix. In this setting the vector $|0\rangle = (1, 0)^\top$ represents logical zero and $|1\rangle = (0, 1)^\top$ represents logical one.
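
The Boolean-to-spin dictionary can be checked numerically. A Python sketch of my own, assuming the common convention $s_i = 1 - 2x_i$ (bit 0 maps to spin $+1$):

```python
from itertools import product

spin = lambda x: 1 - 2 * x  # bit 0 -> spin +1, bit 1 -> spin -1

# XOR as a pseudo-Boolean polynomial, and its spin form obtained by
# hand-substituting x_i = (1 - s_i) / 2:
f_bool = lambda x, y: x + y - 2 * x * y
f_spin = lambda s1, s2: (1 - s1 * s2) / 2

assert all(f_bool(x, y) == f_spin(spin(x), spin(y))
           for x, y in product((0, 1), repeat=2))
print("Boolean and spin forms agree on all four inputs")
```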

In our case, we consider $f \geq 0$, which means that $f$ is non-negative on all its inputs. We are concerned with what we call the kernel of $f$:

$$\ker(f) = \{\, \mathbf{b} \in \{0,1\}^n \mid f(\mathbf{b}) = 0 \,\}. \tag{9}$$

For example, we want to find assignments for the coefficients in (5) so that

$$f(0,0,0) = f(0,1,0) = f(1,0,0) = f(1,1,1) = 0, \tag{10}$$

and on all other inputs (in the complement set $\{0,1\}^3 \setminus \ker(f)$) the function satisfies $f \geq 1$. We will use Mathematica to easily find these coefficients.

Just a reminder that the notebook used in this post is free to download from GitHub. As an example, consider a function that outputs `1` only on the inputs `[0, 0, 1]`, `[0, 1, 0]`, `[1, 0, 0]`:

```
In[*]= W[x_, y_, z_] := (1 - x) (1 - y) z + (1 - x) y (1 - z) + x (1 - y) (1 - z);
In[*]= Outer[W, {0, 1}, {0, 1}, {0, 1}] // Flatten
Out[*]= {0, 1, 1, 0, 1, 0, 0, 0}
```

Here the function `Outer` applies `W` to all combinations of `0` and `1` for each of the three variables. This results in an array which is only `1` on the desired points, as seen from counting in lexicographical order. Now expanding `W`:

```
In[*]= W[x, y, z] // Expand
Out[*]= x + y - 2 x y + z - 2 x z - 2 y z + 3 x y z
```
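
For readers without Mathematica, the same expansion is easy to cross-check in Python (both definitions below are transcriptions of `W` and its expanded form):

```python
from itertools import product

W = lambda x, y, z: (1-x)*(1-y)*z + (1-x)*y*(1-z) + x*(1-y)*(1-z)
W_expanded = lambda x, y, z: x + y - 2*x*y + z - 2*x*z - 2*y*z + 3*x*y*z

# Multilinear forms that agree on all of {0,1}^3 are identical polynomials.
assert all(W(*t) == W_expanded(*t) for t in product((0, 1), repeat=3))
print([W(*t) for t in product((0, 1), repeat=3)])  # [0, 1, 1, 0, 1, 0, 0, 0]
```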

There is an alternative to `Outer` which I make use of often.

```
In[*]= Tuples[{0, 1}, 3]
Out[*]= {{0, 0, 0}, {0, 0, 1}, {0, 1, 0}, {0, 1, 1}, {1, 0, 0}, {1, 0, 1}, {1, 1, 0}, {1, 1, 1}}
```

Now we will apply `W` at level one using `@@@`:

```
In[*]= W@@@Tuples[{0, 1}, 3]
Out[*]= {0, 1, 1, 0, 1, 0, 0, 0}
```

Which again produces the desired result. Those unfamiliar should try the following:

**Exercise.** Try `W@Tuples[{0, 1}, 3]` and `W@@Tuples[{0, 1}, 3]`. `W@Tuples[{0, 1}, 3]` tries to apply `W` to the entire output of `Tuples[{0, 1}, 3]`, which doesn't match `W`'s definition. When you use `W@@Tuples[{0, 1}, 3]`, the `@@` operator (`Apply`) replaces the outermost level of brackets with `W`, again treating the entire set of tuples as arguments for a single call to `W`. This results in a mismatch between the number of arguments `W` expects (three) and the number it receives (eight).

Now let’s find the coefficients for the AND penalty described before. There are other ways to define the constraints, but this is simple.

```
In[*]= f[a_, b_, c_, d_, e_, f_, x_, y_, z_] := a x + b y + c z + d x y + e x z + f y z;
In[*]= constraints = {f[a, b, c, d, e, f, 0, 0, 0] == 0,
f[a, b, c, d, e, f, 0, 1, 0] == 0,
f[a, b, c, d, e, f, 1, 0, 0] == 0,
f[a, b, c, d, e, f, 1, 1, 1] == 0,
f[a, b, c, d, e, f, 0, 0, 1] >= 1,
f[a, b, c, d, e, f, 0, 1, 1] >= 1,
f[a, b, c, d, e, f, 1, 0, 1] >= 1,
f[a, b, c, d, e, f, 1, 1, 0] >= 1};
In[*]= minimizationProblem = NMinimize[{Total[(f[a, b, c, d, e, f, ##] - 1)^2 & @@@Tuples[{0, 1}, 3]], constraints}, {a, b, c, d, e, f}]
Out[*]= {8., {a -> 0., b -> 0., c -> 3., d -> 1., e -> -2., f -> -2.}}
```

`NMinimize` is trying to minimize the sum of squared differences between the function `f` (evaluated at all combinations of `0` and `1` for three variables) and `1`, subject to the specified constraints. The output shows the minimum value obtained (`8.`) and the values of `a`, `b`, `c`, `d`, `e`, and `f` that achieve this minimum. To recover the penalty we do this:

```
In[*]= Fand[x_, y_, z_] = f[0, 0, 3, 1, -2, -2, x, y, z]
Out[*]= x y + 3 z - 2 x z - 2 y z
In[*]= Fand[##] & @@@ Tuples[{0, 1}, 3]
Out[*]= {0, 3, 0, 1, 0, 1, 1, 0}
```
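
As a sanity check outside Mathematica, the recovered penalty can be evaluated in Python (a direct transcription of `Fand` above):

```python
from itertools import product

Fand = lambda x, y, z: x*y + 3*z - 2*x*z - 2*y*z

values = [Fand(*t) for t in product((0, 1), repeat=3)]
print(values)  # [0, 3, 0, 1, 0, 1, 1, 0], matching the Mathematica output

# The penalty vanishes exactly on the satisfying assignments of z = x AND y.
kernel = [t for t in product((0, 1), repeat=3) if Fand(*t) == 0]
print(kernel)  # [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
```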

**Exercise**. Use Mathematica to show that there is no quadratic form of the `W` function. You can ask ChatGPT for the constraints or use this:

```
Wconstraints = {f[a, b, c, d, e, f, 0, 1, 0] == 0,
f[a, b, c, d, e, f, 1, 0, 0] == 0,
f[a, b, c, d, e, f, 0, 0, 1] == 0,
f[a, b, c, d, e, f, 0, 0, 0] >= 1,
f[a, b, c, d, e, f, 1, 1, 1] >= 1,
f[a, b, c, d, e, f, 0, 1, 1] >= 1,
f[a, b, c, d, e, f, 1, 0, 1] >= 1,
f[a, b, c, d, e, f, 1, 1, 0] >= 1};
```

**Exercise.** Use Mathematica to find the GHZ penalty function, which vanishes only on the inputs `[0, 0, 0]` and `[1, 1, 1]` and is at least `1` elsewhere. You can explain what you are after and ChatGPT will give you the constraints, or you can use this:

```
GHZconstraints = {f[a, b, c, d, e, f, 0, 0, 0] == 0,
f[a, b, c, d, e, f, 1, 1, 1] == 0,
f[a, b, c, d, e, f, 0, 0, 1] >= 1,
f[a, b, c, d, e, f, 0, 1, 0] >= 1,
f[a, b, c, d, e, f, 0, 1, 1] >= 1,
f[a, b, c, d, e, f, 1, 0, 0] >= 1,
f[a, b, c, d, e, f, 1, 0, 1] >= 1,
f[a, b, c, d, e, f, 1, 1, 0] >= 1};
```

For those familiar with quantum mechanics, it is fun to consider the linear extension of set functions. Mathematica can do this:

```
In[*]= expansion = Sum[W[x, y, z] TensorProduct[Ket[x], Ket[y], Ket[z]], {x, 0, 1}, {y, 0, 1}, {z, 0, 1}]
Out[*]= |0> ⊗ |0> ⊗ |1> + |0> ⊗ |1> ⊗ |0> + |1> ⊗ |0> ⊗ |0>
```

If you think that looks ugly, just send this over to your favourite chatbot:

`ChatGPT, please rewrite this |0> ⊗ |0> ⊗ |1> + |0> ⊗ |1> ⊗ |0> + |1> ⊗ |0> ⊗ |0> into LaTeX. `

The output displays as

$$|001\rangle + |010\rangle + |100\rangle, \tag{11}$$

as expected.

You can also use commas instead of putting a tensor product between each ket vector:

```
In[*]= inputCombinations = Tuples[{0, 1}, 3];
In[*]= expansion = Sum[W@@l TensorProduct[Ket@@l], {l, inputCombinations}]
Out[*]= |0, 0, 1⟩ + |0, 1, 0⟩ + |1, 0, 0⟩
```
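
The linear extension can also be assembled numerically. A Python sketch of my own builds the length-8 amplitude vector from Kronecker products of the basis kets, recovering the (unnormalised) W state:

```python
from itertools import product

W = lambda x, y, z: (1-x)*(1-y)*z + (1-x)*y*(1-z) + x*(1-y)*(1-z)

def kron(u, v):
    """Kronecker product of two vectors, represented as flat lists."""
    return [a * b for a in u for b in v]

ket = {0: [1, 0], 1: [0, 1]}  # |0> and |1> as column vectors

state = [0] * 8
for b in product((0, 1), repeat=3):
    basis = kron(kron(ket[b[0]], ket[b[1]]), ket[b[2]])
    state = [s + W(*b) * a for s, a in zip(state, basis)]

print(state)  # [0, 1, 1, 0, 1, 0, 0, 0] -> |001> + |010> + |100>
```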

**Exercise**. Use Mathematica to show that the `Fand` penalty corresponds to the following:

(12)

**Exercise**. Use Mathematica to simplify some pseudo Boolean forms considering the relation that . Example:

```
In[*]= Simplify[a b x^2 + a^2 x y + b^2 x y + a b y^2, Assumptions -> {x^2 == x, y^2 == y}]
Out[*]= a^2 x y + b^2 x y + a b (x + y)
```
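
The same idempotent reduction is easy to mimic outside Mathematica, for instance on a dict-of-monomials representation with numeric coefficients (a toy Python sketch of my own; the symbolic case above is left to `Simplify`):

```python
def reduce_idempotent(poly):
    """Reduce exponents using x**k == x for k >= 1 (idempotence of
    Boolean variables). poly maps exponent tuples to coefficients."""
    out = {}
    for exps, coeff in poly.items():
        clipped = tuple(min(e, 1) for e in exps)  # x**k -> x for k >= 1
        out[clipped] = out.get(clipped, 0) + coeff
    return {m: c for m, c in out.items() if c != 0}

# x**2 * y + 3 * x * y**2  ->  4 * x * y
print(reduce_idempotent({(2, 1): 1, (1, 2): 3}))  # {(1, 1): 4}
```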

We will have a lot more to say about pseudo-Boolean functions later on; I just wanted to explain a little bit today. Similar material has been used in the practicals for my quantum information processing course a few times over the years. As always, please let me know if you have any comments, or if you spot any typos.

**On symmetric pseudo-Boolean functions: factorization, kernels and applications**

R Sengupta and J Biamonte

arXiv preprint (2022), DOI: 10.48550/arXiv.2209.15009

**Ground-state spin logic**

J Whitfield, M Faccin, and J Biamonte

EPL (Europhysics Letters) 99(5):57004 (2012)

DOI: 10.1209/0295-5075/99/57004

**Nonperturbative k-body to two-body commuting conversion Hamiltonians and embedding problem instances into Ising spins**

J Biamonte

Physical Review A 77(5):052331 (2008)

DOI: 10.1103/PhysRevA.77.052331

The post Exploring Pseudo-Boolean Functions in Quantum Computing with Mathematica first appeared on Deep Quantum Blog.
