Reference Implementations
Sometimes the obvious implementation can be accelerated dramatically.
It is useful to have a reference implementation where things are done the totally obvious way so that smarter or faster approaches have a fixed target for correctness.
Note
Everything here can be considered an implementation detail and a user should not need the reference implementations at all.
Generators
Villain NeighborhoodUpdate
- class supervillain.generator.reference_implementation.villain.NeighborhoodUpdateSlow(action, interval_phi=3.141592653589793, interval_n=1)[source]
A neighborhood update changes only fields in some small area of the lattice.
In particular, this updating scheme changes the \(\phi\) and \(n\) fields in the
Villain
formulation. It works by picking a site \(x\) at random and proposing a change
\[\begin{split}\begin{align} \Delta\phi_x &\sim \text{uniform}(-\texttt{interval_phi}, +\texttt{interval_phi}) \\ \Delta n_\ell &\sim [-\texttt{interval_n}, +\texttt{interval_n}] \end{align}\end{split}\]
for the \(\phi\) on \(x\) and the \(n\) on links \(\ell\) which touch \(x\).
Warning
Because we currently restrict to \(W=1\) for the Villain formulation we do not update \(v\).
- Parameters
action (Villain) – The action from which we sample.
interval_phi (float) – A single float used to construct the uniform distribution for \(\phi\).
interval_n (int) – A single integer that gives the biggest allowed changes to \(n\).
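The proposal above amounts to one real draw for \(\phi\) and independent integer draws for the touched links. A minimal sketch, with illustrative names that are not the package's API:

```python
import numpy as np

# Sketch of the neighborhood proposal draw (illustrative, not the
# package's actual code): a uniform real shift for phi on the chosen
# site and independent integer shifts for n on each touching link.
rng = np.random.default_rng(0)

def propose(phi_x, n_links, interval_phi=np.pi, interval_n=1):
    dphi = rng.uniform(-interval_phi, +interval_phi)   # Delta phi_x
    dn = rng.integers(-interval_n, interval_n + 1,     # Delta n_ell in
                      size=len(n_links))               # [-interval_n, +interval_n]
    return phi_x + dphi, n_links + dn

# On a 2D lattice a site touches 4 links.
new_phi, new_n = propose(0.0, np.array([0, 0, 0, 0]))
```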
- proposal(cfg, dx)[source]
- Parameters
cfg (dict) – A dictionary with \(\phi\) and \(n\) to update.
dx (Lattice coordinates) – Which site to move to the origin and update.
- Returns
A new configuration with updated \(\phi\) and \(n\).
- Return type
dict
- site(cfg, dx)[source]
Rather than accepting every
proposal()
we perform importance sampling by doing a Metropolis accept/reject step [6] on every single-site proposal.
- Parameters
cfg (dict) – A dictionary with \(\phi\) and \(n\) to update.
dx (Lattice coordinates) – Which site to move to the origin and update.
- Returns
dict – A configuration; either the provided one or a new one changed by the proposal.
float – The Metropolis-Hastings acceptance probability.
int – 1 if the proposal was accepted, 0 otherwise.
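The accept/reject logic returned above can be sketched in a few lines. This is a toy version with a scalar "configuration" and a hypothetical `action` callable, not the package's implementation:

```python
import numpy as np

# Toy sketch of the Metropolis accept/reject step: compute the change in
# action and accept with probability min(1, exp(-dS)).  Names here are
# illustrative, not the package's API.
rng = np.random.default_rng(1)

def metropolis(current, proposal, action):
    dS = action(proposal) - action(current)
    A = min(1.0, np.exp(-dS))      # Metropolis-Hastings acceptance probability
    if rng.uniform() < A:
        return proposal, A, 1      # accepted
    return current, A, 0           # rejected

S = lambda x: 0.5 * x**2                     # toy quadratic action
cfg, A, accepted = metropolis(1.0, 0.0, S)   # dS < 0, so always accepted
```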
Villain Classic Worm
- class supervillain.generator.reference_implementation.villain.ClassicWorm(S)[source]
This implements the classic worm of Prokof’ev and Svistunov [3] for the Villain links \(n\in\mathbb{Z}\) which satisfy \(dn \equiv 0 \pmod{W}\) on every plaquette.
On top of a constraint-satisfying configuration we put down a worm and let the head move, changing the crossed links. We uniformly propose a move in all 4 directions and Metropolize the change.
Additionally, when the head and tail coincide, we allow a fifth possible move, where we remove the worm and emit the updated \(z\) configuration into the Markov chain.
As we evolve the worm we tally the histogram that yields the
Vortex_Vortex
correlation function.
Warning
This update algorithm is not ergodic on its own. It doesn’t change \(\phi\) at all and even leaves \(dn\) alone (while changing \(n\) itself). It can be used, for example,
Sequentially
with the
SiteUpdate
and
LinkUpdate
for an ergodic method.
Warning
Because the algorithm is about moving a single defect around the lattice, when implemented in pure Python the Python-level loop can severely impact performance. While this reference implementation is written in pure Python,
the production-ready generator
uses numba for acceleration.
Note
Because it doesn’t change \(dn\) at all, this algorithm can play an important role in sampling the \(W=\infty\) sector, where all vortices are completely killed, though updates to \(\phi\) would still be needed.
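One step of the head's random walk can be sketched as follows; the acceptance step and the change to the crossed link are elided, and the names are illustrative:

```python
import numpy as np

# Illustrative sketch of one worm move (not the package's code): the
# head proposes one of the 4 lattice directions uniformly; when it sits
# on the tail a fifth move -- removing the worm -- is also available.
rng = np.random.default_rng(2)

def worm_step(head, tail, L):
    directions = [(+1, 0), (-1, 0), (0, +1), (0, -1)]
    moves = len(directions) + (1 if head == tail else 0)
    choice = rng.integers(moves)
    if choice == len(directions):
        return None                  # close the worm, emit the configuration
    dx, dy = directions[choice]
    # The real algorithm Metropolizes the change to the crossed link n;
    # here we only move the head on the periodic lattice.
    return ((head[0] + dx) % L, (head[1] + dy) % L)

head = worm_step((0, 0), (0, 0), L=4)
```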
- inline_observables(steps)[source]
The worm algorithm can measure the
Vortex_Vortex
correlator. We also store the
Worm_Length
for each step.
Worldline Classic Worm
- class supervillain.generator.reference_implementation.worldline.ClassicWorm(S)[source]
This implements the classic worm of Prokof’ev and Svistunov [3] for the worldline links \(m\in\mathbb{Z}\) which satisfy \(\delta m = 0\) on every site.
On top of a constraint-satisfying configuration we put down a worm and let the head move, changing the crossed links. We uniformly propose a move in all 4 directions and Metropolize the change.
Additionally, when the head and tail coincide, we allow a fifth possible move, where we remove the worm and emit the updated \(z\) configuration into the Markov chain.
As we evolve the worm we tally the histogram that yields the
Spin_Spin
correlation function.
Warning
When \(W>1\) this update algorithm is not ergodic on its own. It doesn’t change \(v\) at all. However, when \(W=1\) we can always pick \(v=0\) (any other choice may be absorbed into \(m\)), and this generator can stand alone.
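The constraint \(\delta m = 0\) says that the integer flux into each site equals the flux out. A sketch of the check, under an assumed `(2, L, L)` array layout that is illustrative rather than the package's:

```python
import numpy as np

# Sketch of the worldline constraint (layout is an assumption): m has
# shape (2, L, L) with m[mu, x, y] the link leaving site (x, y) in
# direction mu.  delta m at a site is outgoing minus incoming flux.
def divergence(m):
    outgoing = m[0] + m[1]
    incoming = np.roll(m[0], 1, axis=0) + np.roll(m[1], 1, axis=1)
    return outgoing - incoming

# A closed loop around one plaquette satisfies delta m = 0 everywhere.
L = 4
m = np.zeros((2, L, L), dtype=int)
m[0, 0, 0] = +1   # (0,0) -> (1,0)
m[1, 1, 0] = +1   # (1,0) -> (1,1)
m[0, 0, 1] = -1   # (1,1) -> (0,1)
m[1, 0, 0] = -1   # (0,1) -> (0,0)
```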
- inline_observables(steps)[source]
The worm algorithm can measure the
Spin_Spin
correlator. We also store the
Worm_Length
for each step.
Observables
Spin Correlations
- class supervillain.observable.reference_implementation.spin.Spin_SpinSloppy[source]
Bases:
Observable
This performs the same measurement as the non-Sloppy version but does not get all the juice out of every Worldline configuration.
See the
Spin_Spin
documentation for detailed descriptions.
- class supervillain.observable.reference_implementation.spin.Spin_SpinSlow[source]
Bases:
Observable
We can deform \(Z_J \rightarrow Z_{J}[x,y]\) to include the creation of a boson at \(y\) and the destruction of a boson at \(x\) in the action. We define the expectation value
\[S_{x,y} = \frac{1}{Z_J} Z_J[x,y]\]
and reduce to a single relative coordinate
\[\texttt{Spin_Spin}_{\Delta x} = S_{\Delta x} = \frac{1}{\Lambda} \sum_x S_{x,x-\Delta x}\]
See also
Compared to
Spin_SpinSloppy
this implementation gets more juice from each configuration. In other words, for a fixed configuration their results will differ, but they will agree in expectation.
In contrast, this observable produces the same numerical values as the production implementation
Spin_Spin
, which is much faster.
- static Worldline(S, Links)[source]
Computes the same result as
Spin_Spin
but more slowly. Compared to
Spin_SpinSloppy
we measure the same correlator but get more juice from each configuration by averaging over translations.
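The translation average \(S_{\Delta x} = \frac{1}{\Lambda}\sum_x S_{x,x-\Delta x}\) can be sketched in one dimension for brevity; names and layout are illustrative, not the package's:

```python
import numpy as np

# Sketch of the translation average in the Spin_Spin formula (1D sites
# for brevity): given per-configuration data C[x, y], form
# C(dx) = (1/L) * sum_x C[x, (x - dx) % L] on the periodic lattice.
def translation_average(C):
    L = C.shape[0]
    return np.array([np.mean([C[x, (x - dx) % L] for x in range(L)])
                     for dx in range(L)])

# For exactly translation-invariant data C[x, y] = f((x - y) % L) the
# average recovers f; on real, noisy configurations the payoff of
# summing over all origins is reduced variance per configuration.
f = np.array([1.0, 0.5, 0.25, 0.5])
C = np.array([[f[(x - y) % 4] for y in range(4)] for x in range(4)])
averaged = translation_average(C)
```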