diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 15aa455..0882e9d 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-07-26T17:15:57","documenter_version":"1.5.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.4","generation_timestamp":"2024-07-27T16:55:07","documenter_version":"1.5.0"}} \ No newline at end of file diff --git a/dev/api/index.html b/dev/api/index.html index 2fd6e54..9cbe1c0 100644 --- a/dev/api/index.html +++ b/dev/api/index.html @@ -1,3 +1,3 @@ -API · Krotov.jl

API

Krotov.KrotovResultType

Result object returned by optimize_krotov.

Attributes

The attributes of a KrotovResult object include

  • iter: The number of the current iteration
  • J_T: The value of the final-time functional in the current iteration
  • J_T_prev: The value of the final-time functional in the previous iteration
  • tlist: The time grid on which the controls are discretized.
  • guess_controls: A vector of the original control fields (each field discretized to the points of tlist)
  • optimized_controls: A vector of the optimized control fields. Calculated only at the end of the optimization, not after each iteration.
  • tau_vals: For any trajectory that defines a target_state, the complex overlap of that target state with the propagated state. For any trajectory for which the target_state is nothing, the value is zero.
  • records: A vector of tuples with values returned by a callback routine passed to optimize
  • converged: A boolean flag on whether the optimization is converged. This may be set to true by a check_convergence function.
  • message: A message string to explain the reason for convergence. This may be set by a check_convergence function.

All of the above attributes may be referenced in a check_convergence function passed to optimize(problem; method=Krotov)

source
Krotov.KrotovWrkType

Krotov workspace.

The workspace is for internal use. However, it is also accessible in a callback function. The callback may use or modify some of the following attributes:

  • trajectories: a copy of the trajectories defining the control problem
  • adjoint_trajectories: The trajectories with the adjoint generator
  • kwargs: The keyword arguments from the ControlProblem or the call to optimize.
  • controls: A tuple of the original controls (probably functions)
  • ga_a_int: The current value of $∫gₐ(t)dt$ for each control
  • update_shapes: The update shapes $S(t)$ for each pulse, discretized on the intervals of the time grid.
  • lambda_vals: The current value of λₐ for each control
  • result: The current result object
  • fw_storage: The storage of states for the forward propagation
  • fw_propagators: The propagators used for the forward propagation
  • bw_propagators: The propagators used for the backward propagation
  • use_threads: Flag indicating whether the propagations are performed in parallel.
source
QuantumControlBase.optimizeMethod
using Krotov
-result = optimize(problem; method=Krotov, kwargs...)

optimizes the given control problem using Krotov's method, returning a KrotovResult.

Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of problem; any of these can be overridden with explicit keyword arguments to optimize.

Required problem keyword arguments

  • J_T: A function J_T(Ψ, trajectories) that evaluates the final time functional from a list Ψ of forward-propagated states and problem.trajectories. The function J_T may also take a keyword argument tau. If it does, a vector containing the complex overlaps of the target states (target_state property of each trajectory in problem.trajectories) with the propagated states will be passed to J_T.

Recommended problem keyword arguments

  • lambda_a=1.0: The inverse Krotov step width λₐ for every pulse.
  • update_shape=(t->1.0): A function S(t) for the "update shape" that scales the update for every pulse.

If different controls require different lambda_a or update_shape, a dict pulse_options must be given instead of a global lambda_a and update_shape; see below.

Optional problem keyword arguments

The following keyword arguments are supported (with default values):

  • pulse_options: A dictionary that maps every control (as obtained by get_controls from the problem.trajectories) to the following dict:

    • :lambda_a: The value for inverse Krotov step width λₐ.
    • :update_shape: A function S(t) for the "update shape" that scales the Krotov pulse update.

    This overrides the global lambda_a and update_shape arguments.

  • chi: A function chi(Ψ, trajectories) that receives a list Ψ of the forward propagated states and returns a vector of states $|χₖ⟩ = -∂J_T/∂⟨Ψₖ|$. If not given, it will be automatically determined from J_T via make_chi with the default parameters. Similarly to J_T, if chi accepts a keyword argument tau, it will be passed a vector of complex overlaps.

  • sigma=nothing: A function that calculates the second-order contribution. If not given, the first-order Krotov method is used.

  • iter_start=0: The initial iteration number.

  • iter_stop=5000: The maximum iteration number.

  • prop_method: The propagation method to use for each trajectory; see below.

  • print_iters=true: Whether to print information after each iteration.

  • store_iter_info=Set(): Which fields from print_iters to store in result.records. A subset of Set(["iter.", "J_T", "∫gₐ(t)dt", "J", "ΔJ_T", "ΔJ", "secs"]).

  • callback: A function (or tuple of functions) that receives the Krotov workspace, the iteration number, the list of updated pulses, and the list of guess pulses as positional arguments. The function may return a tuple of values which are stored in the KrotovResult object result.records. The function can also mutate any of its arguments, in particular the updated pulses. This may be used, e.g., to apply a spectral filter to the updated pulses or to perform similar manipulations. Note that print_iters=true (default) adds an automatic callback to print information after each iteration. With store_iter_info, that callback automatically stores a subset of the printed information.

  • check_convergence: A function to check whether convergence has been reached. Receives a KrotovResult object result, and should set result.converged to true and result.message to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with ∘ (function composition). The convergence check is performed after any callback.

  • verbose=false: If true, print information during initialization.

  • rethrow_exceptions: By default, any exception ends the optimization but still returns a KrotovResult that captures the message associated with the exception. This is to avoid losing results from a long-running optimization when an exception occurs in a later iteration. If rethrow_exceptions=true, instead of capturing the exception, it will be thrown normally.

Trajectory propagation

Krotov's method involves the forward and backward propagation for every Trajectory in the problem. The keyword arguments for each propagation (see propagate) are determined from any properties of each Trajectory that have a prop_ prefix, cf. init_prop_trajectory.

In situations where different parameters are required for the forward and backward propagation, instead of the prop_ prefix, the fw_prop_ and bw_prop_ prefixes can be used, respectively. These override any setting with the prop_ prefix. This applies both to the properties of each Trajectory and the problem keyword arguments.

Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global prop_method problem keyword argument.

source
+API · Krotov.jl

API

Krotov.KrotovResultType

Result object returned by optimize_krotov.

Attributes

The attributes of a KrotovResult object include

  • iter: The number of the current iteration
  • J_T: The value of the final-time functional in the current iteration
  • J_T_prev: The value of the final-time functional in the previous iteration
  • tlist: The time grid on which the controls are discretized.
  • guess_controls: A vector of the original control fields (each field discretized to the points of tlist)
  • optimized_controls: A vector of the optimized control fields. Calculated only at the end of the optimization, not after each iteration.
  • tau_vals: For any trajectory that defines a target_state, the complex overlap of that target state with the propagated state. For any trajectory for which the target_state is nothing, the value is zero.
  • records: A vector of tuples with values returned by a callback routine passed to optimize
  • converged: A boolean flag on whether the optimization is converged. This may be set to true by a check_convergence function.
  • message: A message string to explain the reason for convergence. This may be set by a check_convergence function.

All of the above attributes may be referenced in a check_convergence function passed to optimize(problem; method=Krotov)
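
For illustration, a minimal convergence check using these attributes might look like the following sketch (the 10⁻³ threshold and the existence of problem are assumptions for the example):

using QuantumControl: optimize
using Krotov

# Sketch: stop when the final-time functional drops below a threshold.
# The function mutates the KrotovResult passed to it.
function check_J_T(result)
    if result.J_T < 1e-3
        result.converged = true
        result.message = "J_T < 10⁻³"
    end
end

result = optimize(problem; method=Krotov, check_convergence=check_J_T)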

source
Krotov.KrotovWrkType

Krotov workspace.

The workspace is for internal use. However, it is also accessible in a callback function. The callback may use or modify some of the following attributes:

  • trajectories: a copy of the trajectories defining the control problem
  • adjoint_trajectories: The trajectories with the adjoint generator
  • kwargs: The keyword arguments from the ControlProblem or the call to optimize.
  • controls: A tuple of the original controls (probably functions)
  • ga_a_int: The current value of $∫gₐ(t)dt$ for each control
  • update_shapes: The update shapes $S(t)$ for each pulse, discretized on the intervals of the time grid.
  • lambda_vals: The current value of λₐ for each control
  • result: The current result object
  • fw_storage: The storage of states for the forward propagation
  • fw_propagators: The propagators used for the forward propagation
  • bw_propagators: The propagators used for the backward propagation
  • use_threads: Flag indicating whether the propagations are performed in parallel.
source
QuantumControlBase.optimizeMethod
using Krotov
+result = optimize(problem; method=Krotov, kwargs...)

optimizes the given control problem using Krotov's method, returning a KrotovResult.

Keyword arguments that control the optimization are taken from the keyword arguments used in the instantiation of problem; any of these can be overridden with explicit keyword arguments to optimize.

Required problem keyword arguments

  • J_T: A function J_T(Ψ, trajectories) that evaluates the final time functional from a list Ψ of forward-propagated states and problem.trajectories. The function J_T may also take a keyword argument tau. If it does, a vector containing the complex overlaps of the target states (target_state property of each trajectory in problem.trajectories) with the propagated states will be passed to J_T.

Recommended problem keyword arguments

  • lambda_a=1.0: The inverse Krotov step width λₐ for every pulse.
  • update_shape=(t->1.0): A function S(t) for the "update shape" that scales the update for every pulse.

If different controls require different lambda_a or update_shape, a dict pulse_options must be given instead of a global lambda_a and update_shape; see below.
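
As a sketch of both variants (ϵ1 and ϵ2 stand for two controls that appear in the generators of problem.trajectories; the numerical values and the constant update shape are arbitrary examples):

using QuantumControl: optimize
using Krotov

# Global options: one λₐ and one update shape for every pulse
result = optimize(problem; method=Krotov, lambda_a=5.0, update_shape=(t -> 1.0))

# Per-control options via pulse_options (overrides the global values);
# an IdDict is used here because the controls are typically functions.
pulse_options = IdDict(
    ϵ1 => Dict(:lambda_a => 1.0, :update_shape => (t -> 1.0)),
    ϵ2 => Dict(:lambda_a => 10.0, :update_shape => (t -> 1.0)),
)
result = optimize(problem; method=Krotov, pulse_options=pulse_options)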

Optional problem keyword arguments

The following keyword arguments are supported (with default values):

  • pulse_options: A dictionary that maps every control (as obtained by get_controls from the problem.trajectories) to the following dict:

    • :lambda_a: The value for inverse Krotov step width λₐ.
    • :update_shape: A function S(t) for the "update shape" that scales the Krotov pulse update.

    This overrides the global lambda_a and update_shape arguments.

  • chi: A function chi(Ψ, trajectories) that receives a list Ψ of the forward propagated states and returns a vector of states $|χₖ⟩ = -∂J_T/∂⟨Ψₖ|$. If not given, it will be automatically determined from J_T via make_chi with the default parameters. Similarly to J_T, if chi accepts a keyword argument tau, it will be passed a vector of complex overlaps.

  • sigma=nothing: A function that calculates the second-order contribution. If not given, the first-order Krotov method is used.

  • iter_start=0: The initial iteration number.

  • iter_stop=5000: The maximum iteration number.

  • prop_method: The propagation method to use for each trajectory; see below.

  • print_iters=true: Whether to print information after each iteration.

  • store_iter_info=Set(): Which fields from print_iters to store in result.records. A subset of Set(["iter.", "J_T", "∫gₐ(t)dt", "J", "ΔJ_T", "ΔJ", "secs"]).

  • callback: A function (or tuple of functions) that receives the Krotov workspace, the iteration number, the list of updated pulses, and the list of guess pulses as positional arguments. The function may return a tuple of values which are stored in the KrotovResult object result.records. The function can also mutate any of its arguments, in particular the updated pulses. This may be used, e.g., to apply a spectral filter to the updated pulses or to perform similar manipulations; see the sketch after this list. Note that print_iters=true (default) adds an automatic callback to print information after each iteration. With store_iter_info, that callback automatically stores a subset of the printed information.

  • check_convergence: A function to check whether convergence has been reached. Receives a KrotovResult object result, and should set result.converged to true and result.message to an appropriate string in case of convergence. Multiple convergence checks can be performed by chaining functions with ∘ (function composition). The convergence check is performed after any callback.

  • verbose=false: If true, print information during initialization.

  • rethrow_exceptions: By default, any exception ends the optimization but still returns a KrotovResult that captures the message associated with the exception. This is to avoid losing results from a long-running optimization when an exception occurs in a later iteration. If rethrow_exceptions=true, instead of capturing the exception, it will be thrown normally.
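
As referenced in the callback item above, a minimal custom callback might look like this sketch (the positional arguments follow the description above; recording J_T from the workspace's result object is an illustrative choice):

using QuantumControl: optimize
using Krotov

# Sketch: record the iteration number and the current J_T in result.records.
# Arguments: workspace, iteration number, updated pulses, guess pulses.
function record_J_T(wrk, iteration, ϵ_opt, ϵ_guess)
    return (iteration, wrk.result.J_T)
end

result = optimize(problem; method=Krotov, callback=record_J_T)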

Trajectory propagation

Krotov's method involves the forward and backward propagation for every Trajectory in the problem. The keyword arguments for each propagation (see propagate) are determined from any properties of each Trajectory that have a prop_ prefix, cf. init_prop_trajectory.

In situations where different parameters are required for the forward and backward propagation, instead of the prop_ prefix, the fw_prop_ and bw_prop_ prefixes can be used, respectively. These override any setting with the prop_ prefix. This applies both to the properties of each Trajectory and the problem keyword arguments.

Note that the propagation method for each propagation must be specified. In most cases, it is sufficient (and recommended) to pass a global prop_method problem keyword argument.
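
For example, using the Chebychev propagator described further below (a sketch; problem is assumed to be defined):

using QuantumControl: optimize
using Krotov
import QuantumPropagators

# One propagation method for the forward and backward propagation of
# every trajectory in the problem.
result = optimize(problem; method=Krotov, prop_method=QuantumPropagators.Cheby)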

source
diff --git a/dev/examples/index.html b/dev/examples/index.html index 80a0695..b7a5988 100644 --- a/dev/examples/index.html +++ b/dev/examples/index.html @@ -1,2 +1,2 @@ -Examples · Krotov.jl
+Examples · Krotov.jl
diff --git a/dev/externals/index.html b/dev/externals/index.html index 1c9c268..4659ee4 100644 --- a/dev/externals/index.html +++ b/dev/externals/index.html @@ -3,25 +3,25 @@ trajectories, tlist; kwargs... -)

The trajectories are a list of Trajectory instances, each defining an initial state and a dynamical generator for the evolution of that state. Usually, the trajectory will also include a target state (see Trajectory) and possibly a weight. The trajectories may also be given together with tlist as a mandatory keyword argument.

The tlist is the time grid on which the time evolution of the initial states of each trajectory should be propagated. It may also be given as a (mandatory) keyword argument.

The remaining kwargs are keyword arguments that are passed directly to the optimal control method. These typically include e.g. the optimization functional.

The control problem is solved by finding a set of controls that minimize an optimization functional over all trajectories.

source
QuantumControlBase.TrajectoryType

Description of a state's time evolution.

Trajectory(
+)

The trajectories are a list of Trajectory instances, each defining an initial state and a dynamical generator for the evolution of that state. Usually, the trajectory will also include a target state (see Trajectory) and possibly a weight. The trajectories may also be given together with tlist as a mandatory keyword argument.

The tlist is the time grid on which the time evolution of the initial states of each trajectory should be propagated. It may also be given as a (mandatory) keyword argument.

The remaining kwargs are keyword arguments that are passed directly to the optimal control method. These typically include e.g. the optimization functional.

The control problem is solved by finding a set of controls that minimize an optimization functional over all trajectories.
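
A minimal sketch of setting up such a problem (Ψ₀, Ψ₁, and the generator Ĥ are assumed to be defined elsewhere; the using lines assume the usual re-exports from QuantumControl, and J_T_sm is the square-modulus functional from QuantumControl.Functionals, used here only as a typical choice):

using QuantumControl: ControlProblem, Trajectory
using QuantumControl.Functionals: J_T_sm

tlist = collect(range(0, 5; length=1001))            # time grid
traj = Trajectory(Ψ₀, Ĥ; target_state=Ψ₁)            # |Ψ₀⟩ → |Ψ₁⟩
problem = ControlProblem([traj], tlist; J_T=J_T_sm)  # functional to minimize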

source
QuantumControlBase.TrajectoryType

Description of a state's time evolution.

Trajectory(
     initial_state,
     generator;
     target_state=nothing,
     weight=1.0,
     kwargs...
-)

describes the time evolution of the initial_state under a time-dependent dynamical generator (e.g., a Hamiltonian or Liouvillian).

Trajectories are central to quantum control problems: an optimization functional depends on the result of propagating one or more trajectories. For example, when optimizing for a quantum gate, the optimization considers the trajectories of all logical basis states.

In addition to the initial_state and generator, a Trajectory may include data relevant to the propagation and to evaluating a particular optimization functional. Most functionals have the notion of a "target state" that the initial_state should evolve towards, which can be given as the target_state keyword argument. In some functionals, different trajectories enter with different weights [8], which can be given as a weight keyword argument. Any other keyword arguments are also available to a functional as properties of the Trajectory.

A Trajectory can also be instantiated using all keyword arguments.

Properties

All keyword arguments used in the instantiation are available as properties of the Trajectory. At a minimum, this includes initial_state, generator, target_state, and weight.

By convention, properties with a prop_ prefix, e.g., prop_method, will be taken into account when propagating the trajectory. See propagate_trajectory for details.

source
Base.adjointMethod

Construct the adjoint of a Trajectory.

adj_trajectory = adjoint(trajectory)

The adjoint trajectory contains the adjoint of the dynamical generator traj.generator. All other fields contain a copy of the original field value.

The primary purpose of this adjoint is to facilitate the backward propagation under the adjoint generator that is central to gradient-based optimization methods such as GRAPE and Krotov's method.

source
QuantumControlBase.chain_callbacksMethod

Combine multiple callback functions.

chain_callbacks(funcs...)

combines funcs into a single Function that can be passed as a callback to ControlProblem or any optimize-function.

Each function in funcs must be a suitable callback by itself. This means that it should receive the optimization workspace object as its first positional parameter, then positional parameters specific to the optimization method, and then an arbitrary number of data parameters. It must return either nothing or a tuple of "info" objects (which will end up in the records field of the optimization result).

When chaining callbacks, the funcs will be called in series, and the "info" objects will be accumulated into a single result tuple. The combined results from previous funcs will be given to the subsequent funcs as data parameters. This allows for the callbacks in the chain to communicate.

The chain will return the final combined result tuple, or nothing if all funcs return nothing.

Note

When calling optimize, any callback that is a tuple will be automatically processed with chain_callbacks. Thus, chain_callbacks rarely has to be invoked manually.

source
QuantumControlBase.check_amplitudeMethod

Check an amplitude in a Generator in the context of optimal control.

@test check_amplitude(
+)

describes the time evolution of the initial_state under a time-dependent dynamical generator (e.g., a Hamiltonian or Liouvillian).

Trajectories are central to quantum control problems: an optimization functional depends on the result of propagating one or more trajectories. For example, when optimizing for a quantum gate, the optimization considers the trajectories of all logical basis states.

In addition to the initial_state and generator, a Trajectory may include data relevant to the propagation and to evaluating a particular optimization functional. Most functionals have the notion of a "target state" that the initial_state should evolve towards, which can be given as the target_state keyword argument. In some functionals, different trajectories enter with different weights [8], which can be given as a weight keyword argument. Any other keyword arguments are also available to a functional as properties of the Trajectory.

A Trajectory can also be instantiated using all keyword arguments.

Properties

All keyword arguments used in the instantiation are available as properties of the Trajectory. At a minimum, this includes initial_state, generator, target_state, and weight.

By convention, properties with a prop_ prefix, e.g., prop_method, will be taken into account when propagating the trajectory. See propagate_trajectory for details.
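
For example (a sketch with assumed states Ψ₀, Ψ₁ and generator Ĥ; prop_method follows the prop_ convention just described):

using QuantumControl: Trajectory
import QuantumPropagators

traj = Trajectory(
    Ψ₀, Ĥ;
    target_state=Ψ₁,
    weight=1.0,
    prop_method=QuantumPropagators.Cheby,  # used when propagating this trajectory
)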

source
Base.adjointMethod

Construct the adjoint of a Trajectory.

adj_trajectory = adjoint(trajectory)

The adjoint trajectory contains the adjoint of the dynamical generator traj.generator. All other fields contain a copy of the original field value.

The primary purpose of this adjoint is to facilitate the backward propagation under the adjoint generator that is central to gradient-based optimization methods such as GRAPE and Krotov's method.

source
QuantumControlBase.chain_callbacksMethod

Combine multiple callback functions.

chain_callbacks(funcs...)

combines funcs into a single Function that can be passed as a callback to ControlProblem or any optimize-function.

Each function in funcs must be a suitable callback by itself. This means that it should receive the optimization workspace object as its first positional parameter, then positional parameters specific to the optimization method, and then an arbitrary number of data parameters. It must return either nothing or a tuple of "info" objects (which will end up in the records field of the optimization result).

When chaining callbacks, the funcs will be called in series, and the "info" objects will be accumulated into a single result tuple. The combined results from previous funcs will be given to the subsequent funcs as data parameters. This allows for the callbacks in the chain to communicate.

The chain will return the final combined result tuple, or nothing if all funcs return nothing.
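
A sketch of chaining two callbacks explicitly (in practice, passing callback=(store_cb, print_cb) to optimize has the same effect, see the note below; the callback signatures are illustrative):

using QuantumControl: optimize
using QuantumControlBase: chain_callbacks
using Krotov

store_cb(wrk, iteration, args...) = (wrk.result.J_T,)  # contributes an "info" value
print_cb(wrk, iteration, args...) = nothing             # side effects only

callback = chain_callbacks(store_cb, print_cb)
result = optimize(problem; method=Krotov, callback=callback)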

Note

When calling optimize, any callback that is a tuple will be automatically processed with chain_callbacks. Thus, chain_callbacks rarely has to be invoked manually.

source
QuantumControlBase.check_amplitudeMethod

Check an amplitude in a Generator in the context of optimal control.

@test check_amplitude(
     ampl; tlist, for_gradient_optimization=true, quiet=false
-)

verifies that the given ampl is a valid element in the list of amplitudes of a Generator object. This checks all the conditions of QuantumPropagators.Interfaces.check_amplitude. In addition, the following conditions must be met.

If for_gradient_optimization:

The function returns true for a valid amplitude and false for an invalid amplitude. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumControlBase.check_generatorMethod

Check the dynamical generator in the context of optimal control.

@test check_generator(
+)

verifies that the given ampl is a valid element in the list of amplitudes of a Generator object. This checks all the conditions of QuantumPropagators.Interfaces.check_amplitude. In addition, the following conditions must be met.

If for_gradient_optimization:

The function returns true for a valid amplitude and false for an invalid amplitude. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumControlBase.check_generatorMethod

Check the dynamical generator in the context of optimal control.

@test check_generator(
     generator; state, tlist,
     for_expval=true, for_pwc=true, for_time_continuous=false,
     for_parameterization=false, for_gradient_optimization=true,
     atol=1e-15, quiet=false
-)

verifies the given generator. This checks all the conditions of QuantumPropagators.Interfaces.check_generator. In addition, the following conditions must be met.

If for_gradient_optimization:

The function returns true for a valid generator and false for an invalid generator. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumControlBase.get_control_derivMethod
a = get_control_deriv(ampl, control)

returns the derivative $∂a_l(t)/∂ϵ_{l'}(t)$ of the given amplitude $a_l(\{ϵ_{l''}(t)\}, t)$ with respect to the given control $ϵ_{l'}(t)$. For "trivial" amplitudes, where $a_l(t) ≡ ϵ_l(t)$, the result will be either 1.0 or 0.0 (depending on whether ampl ≡ control). For non-trivial amplitudes, the result may be another amplitude that depends on the controls and potentially on time, but can be evaluated to a constant with evaluate.

source
QuantumControlBase.get_control_derivMethod

Get the derivative of the generator $G$ w.r.t. the control $ϵ(t)$.

μ  = get_control_deriv(generator, control)

returns nothing if the generator (Hamiltonian or Liouvillian) does not depend on control, or a generator

\[μ = \frac{∂G}{∂ϵ(t)}\]

otherwise. For linear control terms, μ will be a static operator, e.g. an AbstractMatrix or an Operator. For non-linear controls, μ will be time-dependent, e.g. a Generator. In either case, evaluate should be used to evaluate μ into a constant operator for particular values of the controls and a particular point in time.

For constant generators, e.g. an Operator, the result is always nothing.

source
QuantumControlBase.get_control_derivsMethod

Get a vector of the derivatives of generator w.r.t. each control.

get_control_derivs(generator, controls)

returns a vector containing the derivative of generator with respect to each control in controls. The elements of the vector are either nothing if generator does not depend on that particular control, or a function μ(α) that evaluates the derivative for a particular value of the control, see get_control_deriv.

source
QuantumControlBase.init_prop_trajectoryMethod

Initialize a propagator for a given Trajectory.

propagator = init_prop_trajectory(
+)

verifies the given generator. This checks all the conditions of QuantumPropagators.Interfaces.check_generator. In addition, the following conditions must be met.

If for_gradient_optimization:

The function returns true for a valid generator and false for an invalid generator. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumControlBase.get_control_derivMethod
a = get_control_deriv(ampl, control)

returns the derivative $∂a_l(t)/∂ϵ_{l'}(t)$ of the given amplitude $a_l(\{ϵ_{l''}(t)\}, t)$ with respect to the given control $ϵ_{l'}(t)$. For "trivial" amplitudes, where $a_l(t) ≡ ϵ_l(t)$, the result will be either 1.0 or 0.0 (depending on whether ampl ≡ control). For non-trivial amplitudes, the result may be another amplitude that depends on the controls and potentially on time, but can be evaluated to a constant with evaluate.

source
QuantumControlBase.get_control_derivMethod

Get the derivative of the generator $G$ w.r.t. the control $ϵ(t)$.

μ  = get_control_deriv(generator, control)

returns nothing if the generator (Hamiltonian or Liouvillian) does not depend on control, or a generator

\[μ = \frac{∂G}{∂ϵ(t)}\]

otherwise. For linear control terms, μ will be a static operator, e.g. an AbstractMatrix or an Operator. For non-linear controls, μ will be time-dependent, e.g. a Generator. In either case, evaluate should be used to evaluate μ into a constant operator for particular values of the controls and a particular point in time.

For constant generators, e.g. an Operator, the result is always nothing.

source
QuantumControlBase.get_control_derivsMethod

Get a vector of the derivatives of generator w.r.t. each control.

get_control_derivs(generator, controls)

returns a vector containing the derivative of generator with respect to each control in controls. The elements of the vector are either nothing if generator does not depend on that particular control, or a function μ(α) that evaluates the derivative for a particular value of the control, see get_control_deriv.

source
QuantumControlBase.init_prop_trajectoryMethod

Initialize a propagator for a given Trajectory.

propagator = init_prop_trajectory(
     traj,
     tlist;
     initial_state=traj.initial_state,
     kwargs...
-)

initializes a Propagator for the propagation of the initial_state under the dynamics described by traj.generator.

All keyword arguments are forwarded to QuantumPropagators.init_prop, with default values from any property of traj with a prop_ prefix. That is, the keyword arguments for the underlying QuantumPropagators.init_prop are determined as follows:

  • For any property of traj whose name starts with the prefix prop_, strip the prefix and use that property as a keyword argument for init_prop. For example, if traj.prop_method is defined, method=traj.prop_method will be passed to init_prop. Similarly, traj.prop_inplace would be passed as inplace=traj.prop_inplace, etc.
  • Any explicit keyword argument to init_prop_trajectory overrides the values from the properties of traj.

Note that the propagation method in particular must be specified, as it is a mandatory keyword argument in QuantumPropagators.propagate. Thus, either the trajectory traj must have a prop_method property, or method must be given as an explicit keyword argument.

source
QuantumControlBase.make_chiMethod

Return a function that calculates $|χ_k⟩ = -∂J_T/∂⟨Ψ_k|$.

chi = make_chi(
+)

initializes a Propagator for the propagation of the initial_state under the dynamics described by traj.generator.

All keyword arguments are forwarded to QuantumPropagators.init_prop, with default values from any property of traj with a prop_ prefix. That is, the keyword arguments for the underlying QuantumPropagators.init_prop are determined as follows:

  • For any property of traj whose name starts with the prefix prop_, strip the prefix and use that property as a keyword argument for init_prop. For example, if traj.prop_method is defined, method=traj.prop_method will be passed to init_prop. Similarly, traj.prop_inplace would be passed as inplace=traj.prop_inplace, etc.
  • Any explicit keyword argument to init_prop_trajectory overrides the values from the properties of traj.

Note that the propagation method in particular must be specified, as it is a mandatory keyword argument in QuantumPropagators.propagate. Thus, either the trajectory traj must have a prop_method property, or method must be given as an explicit keyword argument.
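
As a sketch (traj and tlist as in the Trajectory example above; Cheby is used purely as an example method):

using QuantumControlBase: init_prop_trajectory
import QuantumPropagators

# An explicit `method` overrides any traj.prop_method property.
propagator = init_prop_trajectory(traj, tlist; method=QuantumPropagators.Cheby)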

source
QuantumControlBase.make_chiMethod

Return a function that calculates $|χ_k⟩ = -∂J_T/∂⟨Ψ_k|$.

chi = make_chi(
     J_T,
     trajectories;
     mode=:any,
@@ -48,12 +48,12 @@
 \end{align*}\]

and the definition of the Zygote gradient with respect to a complex scalar,

\[∇_{τ_k} J_T = \left( \frac{∂ J_T}{∂ \Re[τ_k]} + i \frac{∂ J_T}{∂ \Im[τ_k]} -\right)\,.\]

Tip

In order to extend make_chi with an analytic implementation for a new J_T function, define a new method make_analytic_chi like so:

QuantumControlBase.make_analytic_chi(::typeof(J_T_sm), trajectories) = chi_sm

which links make_chi for QuantumControl.Functionals.J_T_sm to QuantumControl.Functionals.chi_sm.

Warning

Zygote is notorious for being buggy (silently returning incorrect gradients). Always test automatic derivatives against finite differences and/or other automatic differentiation frameworks.

source
QuantumControlBase.make_grad_J_aMethod

Return a function to evaluate $∂J_a/∂ϵ_{ln}$ for a pulse value running cost.

grad_J_a! = make_grad_J_a(
+\right)\,.\]

Tip

In order to extend make_chi with an analytic implementation for a new J_T function, define a new method make_analytic_chi like so:

QuantumControlBase.make_analytic_chi(::typeof(J_T_sm), trajectories) = chi_sm

which links make_chi for QuantumControl.Functionals.J_T_sm to QuantumControl.Functionals.chi_sm.

Warning

Zygote is notorious for being buggy (silently returning incorrect gradients). Always test automatic derivatives against finite differences and/or other automatic differentiation frameworks.

source
QuantumControlBase.make_grad_J_aMethod

Return a function to evaluate $∂J_a/∂ϵ_{ln}$ for a pulse value running cost.

grad_J_a! = make_grad_J_a(
     J_a,
     tlist;
     mode=:any,
     automatic=:default,
-)

returns a function so that grad_J_a!(∇J_a, pulsevals, tlist) sets $∂J_a/∂ϵ_{ln}$ as the elements of the (vectorized) ∇J_a. The function J_a must have the interface J_a(pulsevals, tlist), see, e.g., J_a_fluence.

The parameters mode and automatic are handled as in make_chi, where mode is one of :any, :analytic, :automatic, and automatic is the loaded module of an automatic differentiation framework, where :default refers to the framework set with QuantumControl.set_default_ad_framework.

Tip

In order to extend make_grad_J_a with an analytic implementation for a new J_a function, define a new method make_analytic_grad_J_a like so:

make_analytic_grad_J_a(::typeof(J_a_fluence), tlist) = grad_J_a_fluence!

which links make_grad_J_a for J_a_fluence to grad_J_a_fluence!.

source
QuantumControlBase.make_print_itersMethod

Construct a method-specific automatic callback for printing iter information.

print_iters = make_print_iters(Method; kwargs...)

constructs the automatic callback to be used by optimize(problem; method=Method, print_iters=true) to print information after each iteration. The keyword arguments are those used to instantiate problem and those explicitly passed to optimize.

Optimization methods should implement make_print_iters(::Val{:Method}; kwargs...) where :Method is the name of the module/package implementing the method.

source
QuantumControlBase.optimizeMethod

Optimize a quantum control problem.

result = optimize(
+)

returns a function so that grad_J_a!(∇J_a, pulsevals, tlist) sets $∂J_a/∂ϵ_{ln}$ as the elements of the (vectorized) ∇J_a. The function J_a must have the interface J_a(pulsevals, tlist), see, e.g., J_a_fluence.

The parameters mode and automatic are handled as in make_chi, where mode is one of :any, :analytic, :automatic, and automatic is the loaded module of an automatic differentiation framework, where :default refers to the framework set with QuantumControl.set_default_ad_framework.

Tip

In order to extend make_grad_J_a with an analytic implementation for a new J_a function, define a new method make_analytic_grad_J_a like so:

make_analytic_grad_J_a(::typeof(J_a_fluence), tlist) = grad_J_a_fluence!

which links make_grad_J_a for J_a_fluence to grad_J_a_fluence!.

source
QuantumControlBase.make_print_itersMethod

Construct a method-specific automatic callback for printing iter information.

print_iters = make_print_iters(Method; kwargs...)

constructs the automatic callback to be used by optimize(problem; method=Method, print_iters=true) to print information after each iteration. The keyword arguments are those used to instantiate problem and those explicitly passed to optimize.

Optimization methods should implement make_print_iters(::Val{:Method}; kwargs...) where :Method is the name of the module/package implementing the method.

source
QuantumControlBase.optimizeMethod

Optimize a quantum control problem.

result = optimize(
     problem;
     method,  # mandatory keyword argument
     check=true,
@@ -61,27 +61,27 @@
     print_iters=true,
     kwargs...
 )

optimizes towards a solution of given problem with the given method, which should be a Module implementing the method, e.g.,

using Krotov
-result = optimize(problem; method=Krotov)

If check is true (default), the initial_state and generator of each trajectory are checked with check_state and check_generator. Any other keyword argument temporarily overrides the corresponding keyword argument in problem. These arguments are available to the optimizer; see each optimization package's documentation for details.

The callback can be given as a function to be called after each iteration in order to analyze the progress of the optimization or to modify the state of the optimizer or the current controls. The signature of callback is method-specific, but callbacks should receive a workspace object as the first argument, the iteration number as the second argument, and then additional method-specific arguments.

The callback function may return a tuple of values, and an optimization method should store these values for each iteration in a records field in its Result object. The callback should be called once with an iteration number of 0 before the first iteration. The callback can also be given as a tuple or vector of functions, which are automatically combined via chain_callbacks.

If print_iters is true (default), an automatic callback is created via the method-specific make_print_iters to print the progress of the optimization after each iteration. This automatic callback runs after any manually given callback.

All remaining keyword arguments are method-specific. To obtain the documentation for which options a particular method uses, run, e.g.,

? optimize(problem, ::Val{:Krotov})

where :Krotov is the name of the module implementing the method. The above is also the method signature that a Module wishing to implement a control method must define.

The returned result object is specific to the optimization method.

source
QuantumControlBase.propagate_trajectoriesMethod

Propagate multiple trajectories in parallel.

result = propagate_trajectories(
+result = optimize(problem; method=Krotov)

If check is true (default), the initial_state and generator of each trajectory are checked with check_state and check_generator. Any other keyword argument temporarily overrides the corresponding keyword argument in problem. These arguments are available to the optimizer; see each optimization package's documentation for details.

The callback can be given as a function to be called after each iteration in order to analyze the progress of the optimization or to modify the state of the optimizer or the current controls. The signature of callback is method-specific, but callbacks should receive a workspace object as the first argument, the iteration number as the second argument, and then additional method-specific arguments.

The callback function may return a tuple of values, and an optimization method should store these values for each iteration in a records field in its Result object. The callback should be called once with an iteration number of 0 before the first iteration. The callback can also be given as a tuple or vector of functions, which are automatically combined via chain_callbacks.

If print_iters is true (default), an automatic callback is created via the method-specific make_print_iters to print the progress of the optimization after each iteration. This automatic callback runs after any manually given callback.

All remaining keyword arguments are method-specific. To obtain the documentation for which options a particular method uses, run, e.g.,

? optimize(problem, ::Val{:Krotov})

where :Krotov is the name of the module implementing the method. The above is also the method signature that a Module wishing to implement a control method must define.

The returned result object is specific to the optimization method.

source
QuantumControlBase.propagate_trajectoriesMethod

Propagate multiple trajectories in parallel.

result = propagate_trajectories(
     trajectories, tlist; use_threads=true, kwargs...
-)

runs propagate_trajectory for every trajectory in trajectories, collects and returns a vector of results. The propagation happens in parallel if use_threads=true (default). All keyword parameters are passed to propagate_trajectory, except that if initial_state is given, it must be a vector of initial states, one for each trajectory. Likewise, to pass pre-allocated storage arrays to storage, a vector of storage arrays must be passed. A simple storage=true will still work to return a vector of storage results.

source
QuantumControlBase.propagate_trajectoryMethod

Propagate a Trajectory.

propagate_trajectory(
+)

runs propagate_trajectory for every trajectory in trajectories, collects and returns a vector of results. The propagation happens in parallel if use_threads=true (default). All keyword parameters are passed to propagate_trajectory, except that if initial_state is given, it must be a vector of initial states, one for each trajectory. Likewise, to pass pre-allocated storage arrays to storage, a vector of storage arrays must be passed. A simple storage=true will still work to return a vector of storage results.

source
QuantumControlBase.propagate_trajectoryMethod

Propagate a Trajectory.

propagate_trajectory(
     traj,
     tlist;
     initial_state=traj.initial_state,
     kwargs...
-)

propagates initial_state under the dynamics described by traj.generator. It takes the same keyword arguments as QuantumPropagators.propagate, with default values from any property of traj with a prop_ prefix (prop_method, prop_inplace, prop_callback, …). See init_prop_trajectory for details.

Note that method (a mandatory keyword argument in QuantumPropagators.propagate) must be specified, either as a property prop_method of the trajectory, or by passing a method keyword argument explicitly.

source
QuantumControlBase.set_atexit_save_optimizationMethod

Register a callback to dump a running optimization to disk on unexpected exit.

A long-running optimization routine may use

if !isnothing(atexit_filename)
+)

propagates initial_state under the dynamics described by traj.generator. It takes the same keyword arguments as QuantumPropagators.propagate, with default values from any property of traj with a prop_ prefix (prop_method, prop_inplace, prop_callback, …). See init_prop_trajectory for details.

Note that method (a mandatory keyword argument in QuantumPropagators.propagate) must be specified, either as a property prop_method of the trajectory, or by passing a method keyword argument explicitly.
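
For example, as a sketch (traj, tlist as above; Cheby is an example method, and storage=true is used to obtain the stored states rather than only the final state):

using QuantumControlBase: propagate_trajectory
import QuantumPropagators

states = propagate_trajectory(
    traj, tlist;
    method=QuantumPropagators.Cheby,
    storage=true,
)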

source
QuantumControlBase.set_atexit_save_optimizationMethod

Register a callback to dump a running optimization to disk on unexpected exit.

A long-running optimization routine may use

if !isnothing(atexit_filename)
     set_atexit_save_optimization(
         atexit_filename, result; msg_property=:message, msg="Abort: ATEXIT"
     )
     # ...
     popfirst!(Base.atexit_hooks)  # remove callback
-end

to register a callback that writes the given result object to the given filename in JLD2 format in the event that the program terminates unexpectedly. The idea is to avoid data loss if the user presses CTRL-C in a non-interactive program (SIGINT), or if the process receives a SIGTERM from an HPC scheduler because the process has reached its allocated runtime limit. Note that the callback cannot protect against data loss in all possible scenarios, e.g., a SIGKILL will terminate the program without giving the callback a chance to run (as will yanking the power cord).

As in the above example, the optimization routine should make set_atexit_save_optimization conditional on an atexit_filename keyword argument, which is what QuantumControl.@optimize_or_load will pass to the optimization routine. The optimization routine must remove the callback from Base.atexit_hooks when it exits normally. Note that in an interactive context, CTRL-C will throw an InterruptException, but not cause a shutdown. Optimization routines that want to prevent data loss in this situation should handle the InterruptException and return result, in addition to using set_atexit_save_optimization.

If msg_property is not nothing, the given msg string will be stored in the corresponding property of the (mutable) result object before it is written out.

The resulting JLD2 file is compatible with QuantumControl.load_optimization.

source
QuantumControlBase.taus!Method

Overlaps of target states with propagated states, calculated in-place.

taus!(τ, Ψ, trajectories; ignore_missing_target_state=false)

overwrites the complex vector τ with the results of taus(Ψ, trajectories).

Throws an ArgumentError if any of the trajectories have a target_state of nothing. If ignore_missing_target_state=true, the values in τ will instead remain unchanged for any trajectories with a missing target state.

source
QuantumControlBase.tausMethod

Overlaps of target states with propagated states

τ = taus(Ψ, trajectories)

calculates a vector of values $τ_k = ⟨Ψ_k^{tgt}|Ψ_k⟩$ where $|Ψ_k^{tgt}⟩$ is the traj.target_state of the $k$'th element of trajectories and $|Ψₖ⟩$ is the $k$'th element of Ψ.

The definition of the τ values with $Ψ_k^{tgt}$ on the left (overlap of target states with propagated states, as opposed to overlap of propagated states with target states) matches Refs. [4] and [7].

The function requires that each trajectory defines a target state. See also taus! for an in-place version that includes well-defined error handling for any trajectories whose target_state property is nothing.

source
QuantumPropagators.Controls.get_controlsMethod
controls = get_controls(problem)

extracts the controls from problem.trajectories.

source
QuantumPropagators.Controls.get_controlsMethod
controls = get_controls(trajectories)

extracts the controls from a list of trajectories (i.e., from each trajectory's generator). Controls that occur multiple times in the different trajectories will occur only once in the result.

source
QuantumPropagators.Controls.get_parametersMethod
parameters = get_parameters(problem)

extracts the parameters from problem.trajectories.

source
QuantumPropagators.Controls.get_parametersMethod
parameters = get_parameters(trajectories)

collects and combines the parameter arrays from all the generators in trajectories. Note that this allows any custom generator type to define a custom get_parameters method to override the default of obtaining the parameters recursively from the controls inside the generator.

source
QuantumPropagators.Controls.substituteMethod
problem = substitute(problem::ControlProblem, replacements)

substitutes in problem.trajectories

source
QuantumPropagators.Controls.substituteMethod
trajectory = substitute(trajectory::Trajectory, replacements)
-trajectories = substitute(trajectories::Vector{<:Trajectory}, replacements)

recursively substitutes the initial_state, generator, and target_state.

source
QuantumControlBase.@threadsifMacro

Conditionally apply multi-threading to for loops.

This is a variation on Base.Threads.@threads that adds a run-time boolean flag to enable or disable threading. It is intended for internal use in packages building on QuantumControlBase.

Usage:

using QuantumControlBase: @threadsif
+end

to register a callback that writes the given result object to the given filename in JLD2 format in the event that the program terminates unexpectedly. The idea is to avoid data loss if the user presses CTRL-C in a non-interactive program (SIGINT), or if the process receives a SIGTERM from an HPC scheduler because the process has reached its allocated runtime limit. Note that the callback cannot protect against data loss in all possible scenarios, e.g., a SIGKILL will terminate the program without giving the callback a chance to run (as will yanking the power cord).

As in the above example, the optimization routine should make set_atexit_save_optimization conditional on an atexit_filename keyword argument, which is what QuantumControl.@optimize_or_load will pass to the optimization routine. The optimization routine must remove the callback from Base.atexit_hooks when it exits normally. Note that in an interactive context, CTRL-C will throw an InterruptException, but not cause a shutdown. Optimization routines that want to prevent data loss in this situation should handle the InterruptException and return result, in addition to using set_atexit_save_optimization.

If msg_property is not nothing, the given msg string will be stored in the corresponding property of the (mutable) result object before it is written out.

The resulting JLD2 file is compatible with QuantumControl.load_optimization.

source
QuantumControlBase.taus!Method

Overlaps of target states with propagated states, calculated in-place.

taus!(τ, Ψ, trajectories; ignore_missing_target_state=false)

overwrites the complex vector τ with the results of taus(Ψ, trajectories).

Throws an ArgumentError if any of the trajectories have a target_state of nothing. If ignore_missing_target_state=true, the values in τ will instead remain unchanged for any trajectories with a missing target state.

source
QuantumControlBase.tausMethod

Overlaps of target states with propagated states

τ = taus(Ψ, trajectories)

calculates a vector of values $τ_k = ⟨Ψ_k^{tgt}|Ψ_k⟩$ where $|Ψ_k^{tgt}⟩$ is the traj.target_state of the $k$'th element of trajectories and $|Ψₖ⟩$ is the $k$'th element of Ψ.

The definition of the τ values with $Ψ_k^{tgt}$ on the left (overlap of target states with propagated states, as opposed to overlap of propagated states with target states) matches Refs. [4] and [7].

The function requires that each trajectory defines a target state. See also taus! for an in-place version that includes well-defined error handling for any trajectories whose target_state property is nothing.

source
QuantumPropagators.Controls.get_controlsMethod
controls = get_controls(problem)

extracts the controls from problem.trajectories.

source
QuantumPropagators.Controls.get_controlsMethod
controls = get_controls(trajectories)

extracts the controls from a list of trajectories (i.e., from each trajectory's generator). Controls that occur multiple times in the different trajectories will occur only once in the result.

source
QuantumPropagators.Controls.get_parametersMethod
parameters = get_parameters(problem)

extracts the parameters from problem.trajectories.

source
QuantumPropagators.Controls.get_parametersMethod
parameters = get_parameters(trajectories)

collects and combines the parameter arrays from all the generators in trajectories. Note that this allows any custom generator type to define a custom get_parameters method to override the default of obtaining the parameters recursively from the controls inside the generator.

source
QuantumPropagators.Controls.substituteMethod
problem = substitute(problem::ControlProblem, replacements)

substitutes in problem.trajectories

source
QuantumPropagators.Controls.substituteMethod
trajectory = substitute(trajectory::Trajectory, replacements)
+trajectories = substitute(trajectories::Vector{<:Trajectory}, replacements)

recursively substitutes the initial_state, generator, and target_state.

source
QuantumControlBase.@threadsifMacro

Conditionally apply multi-threading to for loops.

This is a variation on Base.Threads.@threads that adds a run-time boolean flag to enable or disable threading. It is intended for internal use in packages building on QuantumControlBase.

Usage:

using QuantumControlBase: @threadsif
 
 function optimize(trajectories; use_threads=true)
     @threadsif use_threads for k = 1:length(trajectories)
     # ...
     end
-end
source
QuantumPropagators.AbstractPropagatorType

Abstract base type for all Propagator objects.

All Propagator objects must be instantiated via init_prop and implement the following interface.

Properties

  • state (read-only): The current quantum state in the propagation
  • tlist (read-only): The time grid for the propagation
  • t (read-only): The time at which state is defined. An element of tlist.
  • parameters: parameters that determine the dynamics. The structure of the parameters depends on the concrete Propagator type (i.e., the propagation method). Mutating the parameters affects subsequent propagation steps.
  • backward: Boolean flag to indicate whether the propagation moves forward or backward in time
  • inplace: Boolean flag to indicate whether propagator.state is modified in-place or is recreated by every call of prop_step! or set_state!. With inplace=false, the propagator should generally avoid in-place operations, such as calls to QuantumPropagators.Controls.evaluate!.

Concrete Propagator types may have additional properties or fields, but these should be considered private.

Methods

  • reinit_prop! — reset the propagator to a new initial state at the beginning of the time grid (or the end, for backward propagation)
  • prop_step! – advance the propagator by one step forward (or backward) on the time grid.
  • set_state! — safely mutate the current quantum state of the propagation. Note that directly mutating the state property is not safe. However, Ψ = propagator.state; foo_mutate!(Ψ); set_state!(propagator, Ψ) for some mutating function foo_mutate! is guaranteed to be safe and efficient for both in-place and not-in-place propagators.
  • set_t! — safely mutate the current time (propagator.t), snapping to the values of tlist.

See also

source
QuantumPropagators.ChebyPropagatorType

Propagator for Chebychev propagation (method=QuantumPropagators.Cheby).

This is a PWCPropagator.

source
QuantumPropagators.ExpPropagatorType

Propagator for propagation via direct exponentiation (method=QuantumPropagators.ExpProp)

This is a PWCPropagator.

source
QuantumPropagators.NewtonPropagatorType

Propagator for Newton propagation (method=QuantumPropagators.Newton).

This is a PWCPropagator.

source
QuantumPropagators.PWCPropagatorType

PiecewisePropagator sub-type for piecewise-constant propagators.

Like the more general PiecewisePropagator, this is characterized by propagator.parameters mapping the controls in the generator to a vector of amplitude values on the midpoints of the time grid intervals.

The propagation will use these values as constant within each interval.

source
QuantumPropagators.PiecewisePropagatorType

AbstractPropagator sub-type for piecewise propagators.

A piecewise propagator is determined by a single parameter per control and time grid interval. Consequently, the propagator.parameters are a dictionary mapping the controls found in the generator via get_controls to a vector of values defined on the intervals of the time grid, see discretize_on_midpoints. This does not necessarily imply that these values are the piecewise-constant amplitudes for the intervals. A general piecewise propagator might use interpolation to obtain actual amplitudes within any given time interval.

When the amplitudes are piecewise-constant, the propagator should be a concrete instantiation of a PWCPropagator.

source
QuantumPropagators.PropagationType

Wrapper around the parameters of a call to propagate.

Propagation(
+end
source
QuantumPropagators.AbstractPropagatorType

Abstract base type for all Propagator objects.

All Propagator objects must be instantiated via init_prop and implement the following interface.

Properties

  • state (read-only): The current quantum state in the propagation
  • tlist (read-only): The time grid for the propagation
  • t (read-only): The time at which state is defined. An element of tlist.
  • parameters: parameters that determine the dynamics. The structure of the parameters depends on the concrete Propagator type (i.e., the propagation method). Mutating the parameters affects subsequent propagation steps.
  • backward: Boolean flag to indicate whether the propagation moves forward or backward in time
  • inplace: Boolean flag to indicate whether propagator.state is modified in-place or is recreated by every call of prop_step! or set_state!. With inplace=false, the propagator should generally avoid in-place operations, such as calls to QuantumPropagators.Controls.evaluate!.

Concrete Propagator types may have additional properties or fields, but these should be considered private.

Methods

  • reinit_prop! — reset the propagator to a new initial state at the beginning of the time grid (or the end, for backward propagation)
  • prop_step! – advance the propagator by one step forward (or backward) on the time grid.
  • set_state! — safely mutate the current quantum state of the propagation. Note that directly mutating the state property is not safe. However, Ψ = propagator.state; foo_mutate!(Ψ); set_state!(propagator, Ψ) for some mutating function foo_mutate! is guaranteed to be safe and efficient for both in-place and not-in-place propagators.
  • set_t! — safely mutate the current time (propagator.t), snapping to the values of tlist.

See also

source
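As an illustration of this interface, the following is a minimal sketch (not part of the package documentation) that instantiates a propagator for a hypothetical two-level system and reads back its properties; the Hamiltonian, control field, and time grid are placeholder assumptions:

using QuantumPropagators: hamiltonian, init_prop, Cheby

# hypothetical two-level system with a single control field ϵ(t)
H₀ = ComplexF64[0 0; 0 1]
H₁ = ComplexF64[0 1; 1 0]
ϵ(t) = 0.1 * cos(2π * t)
H = hamiltonian(H₀, (H₁, ϵ))

tlist = collect(range(0, 5; length=501))
Ψ₀ = ComplexF64[1, 0]

propagator = init_prop(Ψ₀, H, tlist; method=Cheby)
propagator.t      # tlist[begin] (read-only property)
propagator.state  # the current state, here still Ψ₀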
QuantumPropagators.ChebyPropagatorType

Propagator for Chebychev propagation (method=QuantumPropagators.Cheby).

This is a PWCPropagator.

source
QuantumPropagators.ExpPropagatorType

Propagator for propagation via direct exponentiation (method=QuantumPropagators.ExpProp)

This is a PWCPropagator.

source
QuantumPropagators.NewtonPropagatorType

Propagator for Newton propagation (method=QuantumPropagators.Newton).

This is a PWCPropagator.

source
QuantumPropagators.PWCPropagatorType

PiecewisePropagator sub-type for piecewise-constant propagators.

Like the more general PiecewisePropagator, this is characterized by propagator.parameters mapping the controls in the generator to a vector of amplitude values on the midpoints of the time grid intervals.

The propagation will use these values as constant within each interval.

source
QuantumPropagators.PiecewisePropagatorType

AbstractPropagator sub-type for piecewise propagators.

A piecewise propagator is determined by a single parameter per control and time grid interval. Consequently, the propagator.parameters are a dictionary mapping the controls found in the generator via get_controls to a vector of values defined on the intervals of the time grid, see discretize_on_midpoints. This does not necessarily imply that these values are the piecewise-constant amplitudes for the intervals. A general piecewise propagator might use interpolation to obtain actual amplitudes within any given time interval.

When the amplitudes are piecewise-constant, the propagator should be a concrete instantiation of a PWCPropagator.

source
QuantumPropagators.PropagationType

Wrapper around the parameters of a call to propagate.

Propagation(
     generator, tlist;
     pre_propagation=nothing, post_propagation=nothing,
     kwargs...
@@ -91,9 +91,9 @@
     propagator;
     pre_propagation=nothing, post_propagation=nothing,
     kwargs...
-)

is a wrapper around the arguments for propagate / init_prop, for use within propagate_sequence.

The positional and keyword arguments are those accepted by the above propagation routines, excluding the initial state. A Propagation may in addition include the pre_propagation and post_propagation keyword arguments recognized by propagate_sequence.

source
QuantumPropagators.cheby_get_spectral_envelopeMethod

Determine the spectral envelope of a generator.

E_min, E_max = cheby_get_spectral_envelope(
+)

is a wrapper around the arguments for propagate / init_prop, for use within propagate_sequence.

The positional and keyword arguments are those accepted by the above propagation routines, excluding the initial state. A Propagation may in addition include the pre_propagation and post_propagation keyword arguments recognized by propagate_sequence.

source
QuantumPropagators.cheby_get_spectral_envelopeMethod

Determine the spectral envelope of a generator.

E_min, E_max = cheby_get_spectral_envelope(
     generator, tlist, control_ranges, method; kwargs...
-)

estimates a lower bound E_min for the lowest eigenvalue of the generator for any values of the controls specified by control_ranges, and an upper bound E_max for the highest eigenvalue.

This is done by constructing operators from the extremal values for the controls as specified in control_ranges and taking the smallest/largest return values from specrange for those operators.

Arguments

  • generator: dynamical generator, e.g., a time-dependent Hamiltonian
  • tlist: The time grid for the propagation
  • control_ranges: a dict that maps controls that occur in generator (cf. get_controls) to a tuple of minimum and maximum amplitude for that control
  • method: method name to pass to specrange
  • kwargs: Any remaining keyword arguments are passed to specrange
source
QuantumPropagators.disable_timingsMethod

Disable the collection of TimerOutputs data.

QuantumPropagators.disable_timings()

disables the collection of timing data previously enabled with enable_timings. This triggers recompilation to completely remove profiling from the code. That is, there is zero cost when the collection of timing data is disabled.

Returns QuantumPropagators.timings_enabled(), i.e., false if successful.

source
QuantumPropagators.enable_timingsMethod

Enable the collection of TimerOutputs data.

QuantumPropagators.enable_timings()

enables certain portions of the package to collect TimerOutputs internally. This aids in profiling and benchmarking propagation methods.

Specifically, after enable_timings(), for any ChebyPropagator or NewtonPropagator, timing data will become available in propagator.wrk.timing_data (as a TimerOutput instance). This data is reset when the propagator is re-instantiated with init_prop or re-initialized with reinit_prop!. This makes the data local to any call of propagate.

Note that enable_timings() triggers recompilation, so propagate should be called at least twice to avoid compilation overhead in the timing data. There is still a small overhead for collecting the timing data.

The collection of timing data can be disabled again with disable_timings.

Returns QuantumPropagators.timings_enabled(), i.e., true if successful.

source
QuantumPropagators.init_propMethod
using QuantumPropagators: Cheby
+)

estimates a lower bound E_min for the lowest eigenvalue of the generator for any values of the controls specified by control_ranges, and an upper bound E_max for the highest eigenvalue.

This is done by constructing operators from the extremal values for the controls as specified in control_ranges and taking the smallest/largest return values from specrange for those operators.

Arguments

  • generator: dynamical generator, e.g., a time-dependent Hamiltonian
  • tlist: The time grid for the propagation
  • control_ranges: a dict that maps controls that occur in generator (cf. get_controls) to a tuple of minimum and maximum amplitude for that control
  • method: method name to pass to specrange
  • kwargs: Any remaining keyword arguments are passed to specrange
source
QuantumPropagators.disable_timingsMethod

Disable the collection of TimerOutputs data.

QuantumPropagators.disable_timings()

disables the collection of timing data previously enabled with enable_timings. This triggers recompilation to completely remove profiling from the code. That is, there is zero cost when the collection of timing data is disabled.

Returns QuantumPropagators.timings_enabled(), i.e., false if successful.

source
QuantumPropagators.enable_timingsMethod

Enable the collection of TimerOutputs data.

QuantumPropagators.enable_timings()

enables certain portions of the package to collect TimerOutputs internally. This aids in profiling and benchmarking propagation methods.

Specifically, after enable_timings(), for any ChebyPropagator or NewtonPropagator, timing data will become available in propagator.wrk.timing_data (as a TimerOutput instance). This data is reset when the propagator is re-instantiated with init_prop or re-initialized with reinit_prop!. This makes the data local to any call of propagate.

Note that enable_timings() triggers recompilation, so propagate should be called at least twice to avoid compilation overhead in the timing data. There is still a small overhead for collecting the timing data.

The collection of timing data can be disabled again with disable_timings.

Returns QuantumPropagators.timings_enabled(), i.e., true if successful.

source
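A hedged usage sketch (the system setup is a placeholder; propagator.wrk.timing_data is the internal field mentioned above):

using QuantumPropagators
using QuantumPropagators: hamiltonian, init_prop, prop_step!, Cheby

H = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], t -> 0.1 * cos(2π * t)))
tlist = collect(range(0, 5; length=501))

QuantumPropagators.enable_timings()   # triggers recompilation
propagator = init_prop(ComplexF64[1, 0], H, tlist; method=Cheby)
foreach(_ -> prop_step!(propagator), 2:length(tlist))
propagator.wrk.timing_data            # TimerOutput with the collected data
QuantumPropagators.disable_timings()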
QuantumPropagators.init_propMethod
using QuantumPropagators: Cheby
 
 cheby_propagator = init_prop(
     state,
@@ -110,7 +110,7 @@
     cheby_coeffs_limit=1e-12,
     check_normalization=false,
     specrange_kwargs...
-)

initializes a ChebyPropagator.

Method-specific keyword arguments

  • control_ranges: a dict that maps the controls in generator (see get_controls) to a tuple of min/max values. The Chebychev coefficients will be calculated based on a spectral envelope that assumes that each control can take arbitrary values within the min/max range. If not given, the ranges are determined automatically. Specifying manual control ranges can be useful when the control amplitudes (parameters) may change during the propagation, e.g. in a sequential-update control scheme.
  • specrange_method: Method to pass to the specrange function
  • specrange_buffer: An additional factor by which to enlarge the estimated spectral range returned by specrange, in order to ensure that Chebychev coefficients are based on an overestimation of the spectral range.
  • cheby_coeffs_limit: The maximum magnitude of Chebychev coefficients that should be treated as non-zero
  • check_normalization: Check whether the Hamiltonian has been properly normalized, i.e., that the spectral range of generator has not been underestimated. This slows down the propagation, but is advisable for novel generators.
  • uniform_dt_tolerance=1e-12: How much the intervals of tlist are allowed to vary while still being considered constant.
  • specrange_kwargs: All further keyword arguments are passed to the specrange function. Most notably, with the default specrange_method=:auto (or specrange_method=:manual), passing E_min and E_max allows to manually specify the spectral range of generator.
source
QuantumPropagators.init_propMethod
using QuantumPropagators: ExpProp
+)

initializes a ChebyPropagator.

Method-specific keyword arguments

  • control_ranges: a dict that maps the controls in generator (see get_controls) to a tuple of min/max values. The Chebychev coefficients will be calculated based on a spectral envelope that assumes that each control can take arbitrary values within the min/max range. If not given, the ranges are determined automatically. Specifying manual control ranges can be useful when the control amplitudes (parameters) may change during the propagation, e.g. in a sequential-update control scheme.
  • specrange_method: Method to pass to the specrange function
  • specrange_buffer: An additional factor by which to enlarge the estimated spectral range returned by specrange, in order to ensure that Chebychev coefficients are based on an overestimation of the spectral range.
  • cheby_coeffs_limit: The maximum magnitude of Chebychev coefficients that should be treated as non-zero
  • check_normalization: Check whether the Hamiltonian has been properly normalized, i.e., that the spectral range of generator has not been underestimated. This slows down the propagation, but is advisable for novel generators.
  • uniform_dt_tolerance=1e-12: How much the intervals of tlist are allowed to vary while still being considered constant.
  • specrange_kwargs: All further keyword arguments are passed to the specrange function. Most notably, with the default specrange_method=:auto (or specrange_method=:manual), passing E_min and E_max allows to manually specify the spectral range of generator.
source
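For instance, a sketch (placeholder system) of specifying manual control_ranges, as one might in a sequential-update optimization where the control amplitude can grow beyond its guess value; keying the dict by the control function itself is an assumption consistent with get_controls:

using QuantumPropagators: hamiltonian, init_prop, Cheby

ϵ(t) = 0.5 * sin(π * t / 5)^2
H = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], ϵ))
tlist = collect(range(0, 5; length=501))

propagator = init_prop(
    ComplexF64[1, 0], H, tlist;
    method=Cheby,
    # overestimate the range the control may reach during the optimization
    control_ranges=IdDict(ϵ => (-1.0, 1.0)),
)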
QuantumPropagators.init_propMethod
using QuantumPropagators: ExpProp
 
 exp_propagator = init_prop(
     state,
@@ -125,7 +125,7 @@
     convert_state=_exp_prop_convert_state(state),
     convert_operator=_exp_prop_convert_operator(generator),
     _...
-)

initializes an ExpPropagator.

Method-specific keyword arguments

  • func: The function to evaluate. The argument H_dt is obtained by constructing an operator H from generator via the evaluate function and then multiplying it by the time step dt for the current time interval. The propagation then simply multiplies the return value of func with the current state
  • convert_state: Type to which to temporarily convert states before multiplying the return value of func.
  • convert_operator: Type to which to convert the operator H before multiplying it with dt and plugging the result into func

The convert_state and convert_operator parameters are useful when the generator and/or state are unusual data structures for which the relevant methods to calculate func are not defined. Often, it is easier to temporarily convert them to standard complex matrices and vectors than to implement the missing methods.

source
QuantumPropagators.init_propMethod
using QuantumPropagators: Newton
+)

initializes an ExpPropagator.

Method-specific keyword arguments

  • func: The function to evaluate. The argument H_dt is obtained by constructing an operator H from generator via the evaluate function and then multiplying it by the time step dt for the current time interval. The propagation then simply multiplies the return value of func with the current state
  • convert_state: Type to which to temporarily convert states before multiplying the return value of func.
  • convert_operator: Type to which to convert the operator H before multiplying it with dt and plugging the result into func

The convert_state and convert_operator parameters are useful when the generator and/or state are unusual data structures for which the relevant methods to calculate func are not defined. Often, it is easier to temporarily convert them to standard complex matrices and vectors than to implement the missing methods.

source
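A sketch of initializing an ExpPropagator for a small placeholder system; the convert_state/convert_operator values shown are illustrative assumptions for the common case of converting to dense arrays:

using QuantumPropagators: hamiltonian, init_prop, ExpProp

H = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], t -> 0.1 * cos(2π * t)))
tlist = collect(range(0, 5; length=501))

propagator = init_prop(
    ComplexF64[1, 0], H, tlist;
    method=ExpProp,
    # for unusual state/operator types, temporarily converting to standard
    # dense arrays is often the simplest way to make the matrix exponential apply
    convert_state=Vector{ComplexF64},
    convert_operator=Matrix{ComplexF64},
)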
QuantumPropagators.init_propMethod
using QuantumPropagators: Newton
 
 newton_propagator = init_prop(
     state,
@@ -142,7 +142,7 @@
     relerr=1e-12,
     max_restarts=50,
     _...
-)

initializes a NewtonPropagator.

Method-specific keyword arguments

  • m_max: maximum Krylov dimension, cf. NewtonWrk
  • func, norm_min, relerr, max_restarts: parameters to pass to newton!
source
QuantumPropagators.init_propMethod

Initialize a Propagator.

propagator = init_prop(
+)

initializes a NewtonPropagator.

Method-specific keyword arguments

  • m_max: maximum Krylov dimension, cf. NewtonWrk
  • func, norm_min, relerr, max_restarts: parameters to pass to newton!
source
QuantumPropagators.init_propMethod

Initialize a Propagator.

propagator = init_prop(
     state, generator, tlist;
     method,  # mandatory keyword argument
     backward=false,
@@ -150,10 +150,10 @@
     piecewise=nothing,
     pwc=nothing,
     kwargs...
-)

initializes a propagator for the time propagation of the given state over a time grid tlist under the time-dependent generator (Hamiltonian/Liouvillian) generator.

Arguments

  • state: The "initial" state for the propagation. For backward=false, this state is taken to be at initial time (tlist[begin]); and for backward=true, at the final time (tlist[end])
  • generator: The time-dependent generator of the dynamics
  • tlist: The time grid over which the propagation is defined. This may or may not be equidistant.

Mandatory keyword arguments

  • method: The propagation method to use. May be given as a name (Symbol), but the recommended usage is to pass a module implementing the propagation method, e.g., using QuantumPropagators: Cheby; method = Cheby. Passing a module ensures that the code implementing the method is correctly loaded. This is particularly important for propagators using third-party backends, like with method=OrdinaryDiffEq.

Optional keyword arguments

  • backward: If true, initialize the propagator for a backward propagation. The resulting propagator.t will be tlist[end], and subsequent calls to prop_step! will move backward on tlist.
  • inplace: If true, the state property of the resulting propagator will be changed in-place by any call to prop_step!. If false, each call to prop_step! changes the reference for propagator.state, and the propagation will not use any in-place operations. Not all propagation methods may support both in-place and not-in-place propagation. In-place propagation is generally more efficient for larger Hilbert space dimensions, but may not be compatible, e.g., with automatic differentiation.
  • piecewise: If given as a boolean, true enforces that the resulting propagator is a PiecewisePropagator, and false enforces that it is not a PiecewisePropagator. For the default piecewise=nothing, whatever type of propagation is the default for the given method will be used. An error is thrown if the given method does not support the required type of propagation.
  • pwc: Like piecewise, but for the stronger PWCPropagator.

All other kwargs are method-dependent and are ignored for methods that do not support them.

The type of the returned propagator is a sub-type of AbstractPropagator, respectively a sub-type of PiecewisePropagator if piecewise=true or a sub-type of PWCPropagator if pwc=true.

Internals

Internally, the (mandatory) keyword method is converted into a fourth positional argument. This allows propagation methods to define their own implementation of init_prop via multiple dispatch. However, when calling init_prop in high-level code, method must always be given as a keyword argument.

See also

source
QuantumPropagators.ode_functionMethod

Wrap around a Generator, for use as an ODE function.

f = ode_function(generator, tlist; c=-1im)

creates a function suitable to be passed to ODEProblem.

\[\gdef\op#1{\hat{#1}}
+)

initializes a propagator for the time propagation of the given state over a time grid tlist under the time-dependent generator (Hamiltonian/Liouvillian) generator.

Arguments

  • state: The "initial" state for the propagation. For backward=false, this state is taken to be at initial time (tlist[begin]); and for backward=true, at the final time (tlist[end])
  • generator: The time-dependent generator of the dynamics
  • tlist: The time grid over which the propagation is defined. This may or may not be equidistant.

Mandatory keyword arguments

  • method: The propagation method to use. May be given as a name (Symbol), but the recommended usage is to pass a module implementing the propagation method, e.g., using QuantumPropagators: Cheby; method = Cheby. Passing a module ensures that the code implementing the method is correctly loaded. This is particularly important for propagators using third-party backends, like with method=OrdinaryDiffEq.

Optional keyword arguments

  • backward: If true, initialize the propagator for a backward propagation. The resulting propagator.t will be tlist[end], and subsequent calls to prop_step! will move backward on tlist.
  • inplace: If true, the state property of the resulting propagator will be changed in-place by any call to prop_step!. If false, each call to prop_step! changes the reference for propagator.state, and the propagation will not use any in-place operations. Not all propagation methods may support both in-place and not-in-place propagation. In-place propagation is generally more efficient for larger Hilbert space dimensions, but may not be compatible, e.g., with automatic differentiation.
  • piecewise: If given as a boolean, true enforces that the resulting propagator is a PiecewisePropagator, and false enforces that it is not a PiecewisePropagator. For the default piecewise=nothing, whatever type of propagation is the default for the given method will be used. An error is thrown if the given method does not support the required type of propagation.
  • pwc: Like piecewise, but for the stronger PWCPropagator.

All other kwargs are method-dependent and are ignored for methods that do not support them.

The type of the returned propagator is a sub-type of AbstractPropagator, respectively a sub-type of PiecewisePropagator if piecewise=true or a sub-type of PWCPropagator if pwc=true.

Internals

Internally, the (mandatory) keyword method is converted into a fourth positional argument. This allows propagation methods to define their own implementation of init_prop via multiple dispatch. However, when calling init_prop in high-level code, method must always be given as a keyword argument.

See also

source
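To make the keyword arguments concrete, a small sketch (placeholder system) contrasting forward, backward, and not-in-place initialization:

using QuantumPropagators: hamiltonian, init_prop, Cheby

H = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], t -> 0.1 * cos(2π * t)))
tlist = collect(range(0, 5; length=501))
Ψ = ComplexF64[0, 1]

fw = init_prop(Ψ, H, tlist; method=Cheby)                 # propagator.t == tlist[begin]
bw = init_prop(Ψ, H, tlist; method=Cheby, backward=true)  # propagator.t == tlist[end]
ad = init_prop(Ψ, H, tlist; method=Cheby, inplace=false)  # e.g., for automatic differentiation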
QuantumPropagators.ode_functionMethod

Wrap around a Generator, for use as an ODE function.

f = ode_function(generator, tlist; c=-1im)

creates a function suitable to be passed to ODEProblem.

\[\gdef\op#1{\hat{#1}} \gdef\ket#1{\vert{#1}\rangle}\]

With generator corresponding to $\op{H}(t)$, this implicitly encodes the ODE

\[\frac{\partial}{\partial t} \ket{\Psi(t)} = c \op{H}(t) \ket{\Psi(t)}\]

for the state $\ket{\Psi(t)}$. With the default $c = -i$, this corresponds to the Schrödinger equation, or the Liouville equation with convention=:LvN.

The resulting f works both in-place and not-in-place, as

f(ϕ, Ψ, vals_dict, t)   # in-place `f(du, u, p, t)`
 ϕ = f(Ψ, vals_dict, t)  # not-in-place `f(u, p, t)`

Calling f as above is functionally equivalent to calling evaluate to obtain an operator H from the original time-dependent generator, and then applying H to the current quantum state Ψ:

H = evaluate(f.generator, t; vals_dict=vals_dict)
-ϕ = c * H * Ψ

where vals_dict may be a dictionary mapping controls to values (set as the parameters p of the underlying ODE solver).

If QuantumPropagators.enable_timings() has been called, profiling data is collected in f.timing_data.

source
QuantumPropagators.prop_step!Function

Advance the propagator by a single time step.

state = prop_step!(propagator)

returns the state obtained from propagating to the next point on the time grid from propagator.t, respectively the previous point if propagator.backward is true.

When the propagation would lead out of the time grid, prop_step! leaves propagator unchanged and returns nothing. Thus, a return value of nothing may be used to signal that a propagation has completed.

source
QuantumPropagators.propagateMethod

Propagate a state over an entire time grid.

state = propagate(
+ϕ = c * H * Ψ

where vals_dict may be a dictionary mapping controls to values (set as the parameters p of the underlying ODE solver).

If QuantumPropagators.enable_timings() has been called, profiling data is collected in f.timing_data.

source
QuantumPropagators.prop_step!Function

Advance the propagator by a single time step.

state = prop_step!(propagator)

returns the state obtained from propagating to the next point on the time grid from propagator.t, respectively the previous point if propagator.backward is true.

When the propagation would lead out of the time grid, prop_step! leaves propagator unchanged and returns nothing. Thus, a return value of nothing may be used to signal that a propagation has completed.

source
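The nothing return value allows writing a complete propagation as a simple loop. A hedged sketch (placeholder system; the helper function propagate_all! is not part of the package):

using QuantumPropagators: hamiltonian, init_prop, prop_step!, Cheby

# hypothetical helper: step until prop_step! signals the end of the time grid
function propagate_all!(propagator)
    Ψ = propagator.state
    while (Ψ_next = prop_step!(propagator)) !== nothing
        Ψ = Ψ_next
    end
    return Ψ  # state at tlist[end] (or tlist[begin] for a backward propagation)
end

H = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], t -> 0.1 * cos(2π * t)))
tlist = collect(range(0, 5; length=501))
propagator = init_prop(ComplexF64[1, 0], H, tlist; method=Cheby)
Ψ_final = propagate_all!(propagator)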
QuantumPropagators.propagateMethod

Propagate a state over an entire time grid.

state = propagate(
     state,
     generator,
     tlist;
@@ -172,7 +172,7 @@
 write_to_storage!(storage, i, data)

is executed, where state is defined at time tlist[i]. See map_observables and write_to_storage! for details. The default value for observables simply results in the propagated states at every point in time being stored.

The storage parameter may also be given as true, and a new storage array will be created internally with init_storage and returned instead of the propagated state:

data = propagate(
     state, generator, tlist; method,
     backward=false; storage=true, observables=observables,
-    callback=nothing, show_progress=false, init_prop_kwargs...)

If backward is true, the input state is assumed to be at time tlist[end], and the propagation progresses backward in time (with a negative time step dt). If storage is given, it will be filled back-to-front during the backward propagation.

If callback is given as a callable, it will be called after each propagation step, as callback(propagator, observables) where propagator is the Propagator object driving the propagation. The callback is called before calculating any observables. Example usage includes writing data to file, or modifying state via set_state!, e.g., removing amplitude from the lowest and highest level to mitigate "truncation error".

If show_progress is given as true, a progress bar will be shown for long-running propagation. In order to customize the progress bar, show_progress may also be a function that receives length(tlist) and returns a ProgressMeter.Progress instance.

If inplace=false is given, the propagation avoids in-place operations. This is slower than inplace=true, but is often required in the context of automatic differentiation (AD), e.g., with Zygote. That is, use inplace=false if propagate is called inside a function to be passed to Zygote.gradient, Zygote.pullback, or a similar function. In an AD context, storage and show_progress should not be used.

The propagate routine returns the propagated state at tlist[end], respectively tlist[1] if backward=true, or a storage array with the stored states / observable data if storage=true.

source
QuantumPropagators.propagateMethod
state = propagate(
+    callback=nothing, show_progress=false, init_prop_kwargs...)

If backward is true, the input state is assumed to be at time tlist[end], and the propagation progresses backward in time (with a negative time step dt). If storage is given, it will be filled back-to-front during the backward propagation.

If callback is given as a callable, it will be called after each propagation step, as callback(propagator, observables) where propagator is the Propagator object driving the propagation. The callback is called before calculating any observables. Example usage includes writing data to file, or modifying state via set_state!, e.g., removing amplitude from the lowest and highest level to mitigate "truncation error".

If show_progress is given as true, a progress bar will be shown for long-running propagation. In order to customize the progress bar, show_progress may also be a function that receives length(tlist) and returns a ProgressMeter.Progress instance.

If inplace=false is given, the propagation avoids in-place operations. This is slower than inplace=true, but is often required in the context of automatic differentiation (AD), e.g., with Zygote. That is, use inplace=false if propagate is called inside a function to be passed to Zygote.gradient, Zygote.pullback, or a similar function. In an AD context, storage and show_progress should not be used.

The propagate routine returns the propagated state at tlist[end], respectively tlist[1] if backward=true, or a storage array with the stored states / observable data if storage=true.

source
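A short sketch of both return modes, for a placeholder two-level system:

using QuantumPropagators: hamiltonian, propagate, Cheby

H = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], t -> 0.1 * cos(2π * t)))
tlist = collect(range(0, 5; length=501))
Ψ₀ = ComplexF64[1, 0]

Ψ_final = propagate(Ψ₀, H, tlist; method=Cheby)
# with storage=true, the default observables store the propagated state
# at every point of tlist
data = propagate(Ψ₀, H, tlist; method=Cheby, storage=true)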
QuantumPropagators.propagateMethod
state = propagate(
     state,
     propagator;
     storage=nothing,
@@ -180,20 +180,20 @@
     show_progress=false,
     callback=nothing,
     reinit_prop_kwargs...
-)

re-initializes the given propagator with state (see reinit_prop!) and then calls the lower-level propagate(propagator; ...).

source
QuantumPropagators.propagateMethod
state = propagate(
+)

re-initializes the given propagator with state (see reinit_prop!) and then calls the lower-level propagate(propagator; ...).

source
QuantumPropagators.propagateMethod
state = propagate(
     propagator;
     storage=nothing,
     observables=<store state>,
     show_progress=false,
     callback=nothing,
-)

propagates a freshly initialized propagator (immediately after init_prop). Used in the higher-level propagate(state, generator, tlist; kwargs...).

source
QuantumPropagators.propagate_sequenceMethod

Propagate a state through a sequence of generators.

states = propagate_sequence(
+)

propagates a freshly initialized propagator (immediately after init_prop). Used in the higher-level propagate(state, generator, tlist; kwargs...).

source
QuantumPropagators.propagate_sequenceMethod

Propagate a state through a sequence of generators.

states = propagate_sequence(
     state,
     propagations;
     storage=nothing,
     pre_propagation=nothing,
     post_propagation=nothing,
     kwargs...
-)

takes an initial state and performs a sequence of propagate calls using the parameters in propagations. The initial state for each step in the sequence is the state resulting from the previous step. Optionally, before and after each step, a pre_propagation and post_propagation function may modify the state instantaneously, e.g., to perform a frame transformation. Returns the vector of states at the end of each step (after any post_propagation, and before the pre_propagation of the next step).

Arguments

  • state: The initial state
  • propagations: A vector of Propagation instances, one per step in the sequence, each containing the arguments for the call to propagate for that step. The Propagation contains the generator and time grid for each step as positional parameters, or alternatively a pre-initialized Propagator, and any keyword arguments for propagate that are specific to that step. Note that propagate keyword arguments that are common to all steps can be given directly to propagate_sequence.
  • storage: If storage=true, return a vector of storage objects as returned by propagate(…, storage=true) for each propagation step, instead of the state after each step. To use a pre-initialized storage, each Propagation in propagations should have a storage keyword argument instead.
  • pre_propagation: If not nothing, must be a function that receives the same arguments as propagate and returns a state. Called immediately before the propagate of each step, and the state returned by pre_propagation will become the initial state for the subsequent call to propagate. Generally, pre_propagation would be different in each step of the sequence, and should be given as a keyword argument in a particular Propagation.
  • post_propagation: If not nothing, a function that receives the same arguments as propagate and returns a state, see pre_propagation. The returned state becomes the initial state for the next step in the sequence (and may be further processed by the following pre_propagation). Like pre_propagation, this will generally be set as a keyword argument for a particular Propagation, not as a global keyword argument to propagate_sequence.

All other keyword arguments are forwarded to propagate. Thus, keyword arguments that are common to all steps in the sequence should be given as keyword arguments to propagate_sequence directly.

source
QuantumPropagators.reinit_prop!Method

Re-initialize a propagator.

reinit_prop!(propagator, state; kwargs...)

resets the propagator to state at the beginning of the time grid, respectively the end of the time grid if propagator.backward is true.

At a minimum, this is equivalent to a call to set_state! followed by a call to set_t!, but some propagators may have additional requirements on re-initialization, such as refreshing expansion coefficients for ChebyPropagator. In this case, the kwargs may be additional keyword arguments specific to the concrete type of propagator.

source
QuantumPropagators.reinit_prop!Method
reinit_prop!(
+)

takes an initial state and performs a sequence of propagate calls using the parameters in propagations. The initial state for each step in the sequence is the state resulting from the previous step. Optionally, before and after each step, a pre_propagation and post_propagation function may modify the state instantaneously, e.g., to perform a frame transformation. Returns the vector of states at the end of each step (after any post_propagation, and before the pre_propagation of the next step).

Arguments

  • state: The initial state
  • propagations: A vector of Propagation instances, one per step in the sequence, each containing the arguments for the call to propagate for that step. The Propagation contains the generator and time grid for each step as positional parameters, or alternatively a pre-initialized Propagator, and any keyword arguments for propagate that are specific to that step. Note that propagate keyword arguments that are common to all steps can be given directly to propagate_sequence.
  • storage: If storage=true, return a vector of storage objects as returned by propagate(…, storage=true) for each propagation step, instead of the state after each step. To use a pre-initialized storage, each Propagation in propagations should have a storage keyword argument instead.
  • pre_propagation: If not nothing, must be a function that receives the same arguments as propagate and returns a state. Called immediately before the propagate of each step, and the state returned by pre_propagation will become the initial state for the subsequent call to propagate. Generally, pre_propagation would be different in each step of the sequence, and should be given as a keyword argument in a particular Propagation.
  • post_propagation: If not nothing, a function that receives the same arguments as propagate and returns a state, see pre_propagation. The returned state becomes the initial state for the next step in the sequence (and may be further processed by the following pre_propagation). Like pre_propagation, this will generally be set as a keyword argument for a particular Propagation, not as a global keyword argument to propagate_sequence.

All other keyword arguments are forwarded to propagate. Thus, keyword arguments that are common to all steps in the sequence should be given as keyword arguments to propagate_sequence directly.

source
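A hedged sketch of a two-stage sequence (both stages use placeholder generators; keyword arguments common to both stages, like method, are passed to propagate_sequence directly):

using QuantumPropagators: hamiltonian, propagate_sequence, Propagation, Cheby

H_a = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], t -> 0.1 * cos(2π * t)))
H_b = hamiltonian(ComplexF64[0 0; 0 1], (ComplexF64[0 1; 1 0], t -> 0.1 * sin(2π * t)))
tlist_a = collect(range(0, 5; length=501))
tlist_b = collect(range(5, 10; length=501))
Ψ₀ = ComplexF64[1, 0]

propagations = [Propagation(H_a, tlist_a), Propagation(H_b, tlist_b)]
states = propagate_sequence(Ψ₀, propagations; method=Cheby)
# states[end] is the state after the full sequence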
QuantumPropagators.reinit_prop!Method

Re-initialize a propagator.

reinit_prop!(propagator, state; kwargs...)

resets the propagator to state at the beginning of the time grid, respectively the end of the time grid if propagator.backward is true.

At a minimum, this is equivalent to a call to set_state! followed by a call to set_t!, but some propagators may have additional requirements on re-initialization, such as refreshing expansion coefficients for ChebyPropagator. In this case, the kwargs may be additional keyword arguments specific to the concrete type of propagator.

source
QuantumPropagators.reinit_prop!Method
reinit_prop!(
     propagator::ChebyPropagator,
     state;
     transform_control_ranges=((c, ϵ_min, ϵ_max, check) -> (ϵ_min, ϵ_max)),
@@ -204,9 +204,9 @@
     else
         return (min(ϵ_min, 5 * ϵ_min), max(ϵ_max, 5 * ϵ_max))
     end
-end

will re-calculate the Chebychev coefficients only if the current amplitudes differ by more than a factor of two from the ranges that were used when initializing the propagator (control_ranges parameter in init_prop, which would have had to overestimate the actual amplitudes by at least a factor of two). When re-calculating, the control_ranges will overestimate the amplitudes by a factor of five. With this transform_control_ranges, the propagation will be stable as long as the amplitudes do not change dynamically by more than a factor of 2.5 from their original range, while also not re-calculating coefficients unnecessarily in each pass because of modest changes in the amplitudes.

The transform_control_ranges argument is only relevant in the context of optimal control, where the same propagator will be used for many iterations with changing control field amplitudes.

All other keyword arguments are ignored.

source
QuantumPropagators.set_state!Method

Set the current state of the propagator.

set_state!(propagator, state)

sets the propagator.state property and returns propagator.state. In order to mutate the current state after a call to prop_step!, the following pattern is recommended:

Ψ = propagator.state
+end

will re-calculate the Chebychev coefficients only if the current amplitudes differ by more than a factor of two from the ranges that were used when initializing the propagator (control_ranges parameter in init_prop, which would have had to overestimate the actual amplitudes by at least a factor of two). When re-calculating, the control_ranges will overestimate the amplitudes by a factor of five. With this transform_control_ranges, the propagation will be stable as long as the amplitudes do not change dynamically by more than a factor of 2.5 from their original range, while also not re-calculating coefficients unnecessarily in each pass because of modest changes in the amplitudes.

The transform_control_ranges argument is only relevant in the context of optimal control, where the same propagator will be used for many iterations with changing control field amplitudes.

All other keyword arguments are ignored.

source
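For reference, a sketch of the complete transform_control_ranges function that the description above (and the partial listing in the hunk) refers to; the factor-of-two check and factor-of-five enlargement follow the surrounding text:

function transform_control_ranges(c, ϵ_min, ϵ_max, check)
    if check
        return (min(ϵ_min, 2 * ϵ_min), max(ϵ_max, 2 * ϵ_max))
    else
        return (min(ϵ_min, 5 * ϵ_min), max(ϵ_max, 5 * ϵ_max))
    end
end

# reinit_prop!(propagator, Ψ; transform_control_ranges)  # propagator/Ψ as set up elsewhere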
QuantumPropagators.set_state!Method

Set the current state of the propagator.

set_state!(propagator, state)

sets the propagator.state property and returns propagator.state. In order to mutate the current state after a call to prop_step!, the following pattern is recommended:

Ψ = propagator.state
 foo_mutate!(Ψ)
-set_state!(propagator, Ψ)

where foo_mutate! is some function that mutates Ψ. This is guaranteed to work efficiently both for in-place and not-in-place propagators, without incurring unnecessary copies.

Warning
foo_mutate!(propagator.state)

by itself is not a safe operation. Always follow it by

set_state!(propagator, propagator.state)

See also

source
QuantumPropagators.set_t!Method

Set the current time for the propagation.

set_t!(propagator, t)

Sets propagator.t to the given value of t, where t must be an element of propagator.tlist.

See also

source
QuantumPropagators.timings_enabledMethod

Check whether the collection of TimerOutputs data is active.

QuantumPropagators.timings_enabled()

returns true if QuantumPropagators.enable_timings() was called, and false otherwise or after QuantumPropagators.disable_timings().

source
QuantumPropagators.Generators.GeneratorType

A time-dependent generator.

Generator(ops::Vector{OT}, amplitudes::Vector{AT})

produces an object of type Generator{OT,AT} that represents

\[Ĥ(t)= Ĥ_0 + \sum_l a_l(\{ϵ_{l'}(t)\}, t) \, Ĥ_l\,,\]

where $Ĥ_l$ are the ops and $a_l(t)$ are the amplitudes. $Ĥ(t)$ and $Ĥ_l$ may represent operators in Hilbert space or super-operators in Liouville space. If the number of amplitudes is less than the number of ops, the first ops are considered as drift terms ($Ĥ_0$, respectively subsequent terms with $a_l ≡ 1$). At least one time-dependent amplitude is required. Each amplitude may depend on one or more control functions $ϵ_{l'}(t)$, although most typically $a_l(t) ≡ ϵ_l(t)$, that is, the amplitudes are simply a vector of the controls. See hamiltonian for details.

A Generator object should generally not be instantiated directly, but via hamiltonian or liouvillian.

The list of ops and amplitudes are properties of the Generator. They should not be mutated.

See also

  • Operator for static generators, which may be obtained from a Generator via evaluate.
source
QuantumPropagators.Generators.OperatorType

A static operator in Hilbert or Liouville space.

Operator(ops::Vector{OT}, coeffs::Vector{CT})

produces an object of type Operator{OT,CT} that encapsulates the "lazy" sum

\[Ĥ = \sum_l c_l Ĥ_l\,,\]

where $Ĥ_l$ are the ops and $c_l$ are the coeffs, which each must be a constant Number. If the number of coefficients is less than the number of operators, the first ops are considered to have $c_l = 1$.

An Operator object would generally not be instantiated directly, but be obtained from a Generator via evaluate.

The $Ĥ_l$ in the sum are considered immutable. This implies that an Operator can be updated in-place with evaluate! by only changing the coeffs.

source
QuantumPropagators.Generators.ScaledOperatorType

A static operator with a scalar pre-factor.

op = ScaledOperator(α, Ĥ)

represents the "lazy" product $α Ĥ$ where $Ĥ$ is an operator (typically an Operator instance) and $α$ is a scalar.

source
QuantumPropagators.Generators.hamiltonianMethod

Initialize a (usually time-dependent) Hamiltonian.

The most common usage is, e.g.,

using QuantumPropagators
+set_state!(propagator, Ψ)

where foo_mutate! is some function that mutates Ψ. This is guaranteed to work efficiently both for in-place and not-in-place propagators, without incurring unnecessary copies.

Warning
foo_mutate!(propagator.state)

by itself is not a safe operation. Always follow it by

set_state!(propagator, propagator.state)

See also

source
QuantumPropagators.set_t!Method

Set the current time for the propagation.

set_t!(propagator, t)

Sets propagator.t to the given value of t, where t must be an element of propagator.tlist.

See also

source
QuantumPropagators.timings_enabledMethod

Check whether the collection of TimerOutputs data is active.

QuantumPropagators.timings_enabled()

returns true if QuantumPropagators.enable_timings() was called, and false otherwise or after QuantumPropagators.disable_timings().

source
QuantumPropagators.Generators.GeneratorType

A time-dependent generator.

Generator(ops::Vector{OT}, amplitudes::Vector{AT})

produces an object of type Generator{OT,AT} that represents

\[Ĥ(t)= Ĥ_0 + \sum_l a_l(\{ϵ_{l'}(t)\}, t) \, Ĥ_l\,,\]

where $Ĥ_l$ are the ops and $a_l(t)$ are the amplitudes. $Ĥ(t)$ and $Ĥ_l$ may represent operators in Hilbert space or super-operators in Liouville space. If the number of amplitudes is less than the number of ops, the first ops are considered as drift terms ($Ĥ_0$, respectively subsequent terms with $a_l ≡ 1$). At least one time-dependent amplitude is required. Each amplitude may depend on one or more control functions $ϵ_{l'}(t)$, although most typically $a_l(t) ≡ ϵ_l(t)$, that is, the amplitudes are simply a vector of the controls. See hamiltonian for details.

A Generator object should generally not be instantiated directly, but via hamiltonian or liouvillian.

The list of ops and amplitudes are properties of the Generator. They should not be mutated.

See also

  • Operator for static generators, which may be obtained from a Generator via evaluate.
source
QuantumPropagators.Generators.OperatorType

A static operator in Hilbert or Liouville space.

Operator(ops::Vector{OT}, coeffs::Vector{CT})

produces an object of type Operator{OT,CT} that encapsulates the "lazy" sum

\[Ĥ = \sum_l c_l Ĥ_l\,,\]

where $Ĥ_l$ are the ops and $c_l$ are the coeffs, which each must be a constant Number. If the number of coefficients is less than the number of operators, the first ops are considered to have $c_l = 1$.

An Operator object would generally not be instantiated directly, but be obtained from a Generator via evaluate.

The $Ĥ_l$ in the sum are considered immutable. This implies that an Operator can be updated in-place with evaluate! by only changing the coeffs.

source
QuantumPropagators.Generators.ScaledOperatorType

A static operator with a scalar pre-factor.

op = ScaledOperator(α, Ĥ)

represents the "lazy" product $α Ĥ$ where $Ĥ$ is an operator (typically an Operator instance) and $α$ is a scalar.

source
QuantumPropagators.Generators.hamiltonianMethod

Initialize a (usually time-dependent) Hamiltonian.

The most common usage is, e.g.,

using QuantumPropagators
 
 H₀ = ComplexF64[0 0; 0 1];
 H₁ = ComplexF64[0 1; 1 0];
@@ -231,33 +231,33 @@
  ops::Vector{Matrix{ComplexF64}}:
   ComplexF64[0.0 + 0.0im 0.0 + 0.0im; 0.0 + 0.0im 1.0 + 0.0im]
   ComplexF64[0.0 + 0.0im 1.0 + 0.0im; 1.0 + 0.0im 0.0 + 0.0im]
- coeffs: [2.0]

The hamiltonian function may generate warnings if the terms are of an unexpected type or structure. These can be suppressed with check=false.

source
QuantumPropagators.Generators.liouvillianFunction

Construct a Liouvillian Generator.

ℒ = liouvillian(Ĥ, c_ops=(); convention=:LvN, check=true)

calculates the sparse Liouvillian super-operator from the Hamiltonian and a list c_ops of Lindblad operators.

With convention=:LvN, applying the resulting ℒ to a vectorized density matrix ρ⃗ calculates $\frac{d}{dt} \vec{\rho}(t) = ℒ \vec{\rho}(t)$, equivalent to the Liouville-von-Neumann equation for the density matrix $ρ̂$,

\[\frac{d}{dt} ρ̂(t)
+ coeffs: [2.0]

The hamiltonian function may generate warnings if the terms are of an unexpected type or structure. These can be suppressed with check=false.

source
QuantumPropagators.Generators.liouvillianFunction

Construct a Liouvillian Generator.

ℒ = liouvillian(Ĥ, c_ops=(); convention=:LvN, check=true)

calculates the sparse Liouvillian super-operator from the Hamiltonian and a list c_ops of Lindblad operators.

With convention=:LvN, applying the resulting ℒ to a vectorized density matrix ρ⃗ calculates $\frac{d}{dt} \vec{\rho}(t) = ℒ \vec{\rho}(t)$, equivalent to the Liouville-von-Neumann equation for the density matrix $ρ̂$,

\[\frac{d}{dt} ρ̂(t) = -i [Ĥ, ρ̂(t)] + \sum_k\left( Â_k ρ̂ Â_k^\dagger - \frac{1}{2} A_k^\dagger Â_k ρ̂ - \frac{1}{2} ρ̂ Â_k^\dagger Â_k - \right)\,,\]

where the Lindblad operators $Â_k$ are the elements of c_ops.

The Hamiltonian $Ĥ$ will generally be time-dependent; it may, for instance, be a Generator as returned by hamiltonian. For example, for a Hamiltonian with the terms (Ĥ₀, (Ĥ₁, ϵ₁), (Ĥ₂, ϵ₂)), where Ĥ₀, Ĥ₁, Ĥ₂ are matrices and ϵ₁ and ϵ₂ are functions of time, the resulting ℒ will be a Generator corresponding to the terms (ℒ₀, (ℒ₁, ϵ₁), (ℒ₂, ϵ₂)), where the initial term is the superoperator ℒ₀ for the static component of the Liouvillian, i.e., the commutator with the drift Hamiltonian Ĥ₀, plus the dissipator (sum over $k$), as a sparse matrix. Time-dependent Lindblad operators are not currently supported. The remaining elements are the tuples (ℒ₁, ϵ₁) and (ℒ₂, ϵ₂) corresponding to the commutators with the two control Hamiltonians, where ℒ₁ and ℒ₂ again are sparse matrices.

If $Ĥ$ is not time-dependent, the resulting ℒ will likewise be a static operator. Passing H=nothing with non-empty c_ops initializes a pure dissipator.

With convention=:TDSE, the Liouvillian will be constructed for the equation of motion $i \hbar \frac{d}{dt} \vec{\rho}(t) = ℒ \vec{\rho}(t)$ to match exactly the form of the time-dependent Schrödinger equation. While this notation is not standard in the literature of open quantum systems, it has the benefit that the resulting ℒ can be used in a numerical propagator for a (non-Hermitian) Schrödinger equation without any change. Thus, for numerical applications, convention=:TDSE is generally preferred. The returned ℒ differs between the two conventions only by a factor of $i$, since we generally assume $\hbar=1$.

The convention keyword argument is mandatory, to force a conscious choice.

See Goerz et. al. "Optimal control theory for a unitary operation under dissipative evolution", arXiv 1312.0111v2, Appendix B.2 for the explicit construction of the Liouvillian superoperator as a sparse matrix.

Passing check=false suppresses warnings and errors about unexpected types or the structure of the arguments, cf. hamiltonian.

source
QuantumPropagators.Arnoldi.arnoldi!Method
m = arnoldi!(Hess, q, m, Ψ, H, dt=1.0; extended=true, norm_min=1e-15)

Calculate the Hessenberg matrix and Arnoldi vectors of H dt, from Ψ.

For a given order m, the m×m Hessenberg matrix is calculated and stored in the pre-allocated Hess. Further, an array of m normalized Arnoldi vectors is stored in the pre-allocated q, plus one additional unnormalized Arnoldi vector. The unnormalized (m+1)st vector could be used to easily extend a given m×m Hessenberg matrix to a (m+1)×(m+1) matrix.

If the extended Hessenberg matrix is requested (extended=true, default), the (m+1)st Arnoldi vector is also normalized, and its norm will be stored in the (m+1, m) entry of the (extended) Hessenberg matrix, which is an (m+1)×(m+1) matrix.

Return the size m of the calculated Hessenberg matrix. This will usually be the input m, except when the Krylov dimension of H starting from Ψ is less than m. E.g., if Ψ is an eigenstate of H, the returned m will be 1.

See https://en.wikipedia.org/wiki/Arnoldi_iteration for a description of the algorithm.

Arguments

  • Hess::Matrix{ComplexF64}: Pre-allocated storage for the Hessenberg matrix. Can be uninitialized on input. The matrix must be at least of size m×m, or (m+1)×(m+1) if extended=true. On output, the m×m sub-matrix of Hess (with the returned output m) will contain the Hessenberg matrix, and all other elements of Hess will be set to zero.
  • q: Pre-allocated array of states similar to Ψ, as storage for the calculated Arnoldi vectors. These may be un-initialized on input. Must be at least of length m+1
  • m: The requested dimensions of the output Hessenberg matrix.
  • Ψ: The starting vector for the Arnoldi procedure. This can be of any type, as long as Φ = H * Ψ results in a vector similar to Ψ, there is an inner product of Φ and Ψ (Ψ⋅Φ is defined), and norm(Ψ) is defined.
  • H: The operator (up to dt) for which to calculate the Arnoldi procedure. Can be of any type, as long as H * Ψ is defined.
  • dt: The implicit time step; the total operator for which to calculate the Arnoldi procedure is H * dt
  • extended: If true (default), calculate the extended Hessenberg matrix, and normalize the final Arnoldi vector
  • norm_min: the minimum value of the norm of Ψ at which Ψ should be considered the zero vector
source
QuantumPropagators.Arnoldi.diagonalize_hessenberg_matrixMethod
diagonalize_hessenberg_matrix(Hess, m; accumulate=false)

Diagonalize the m × m top left submatrix of the given Hessenberg matrix.

If accumulate is true, return the concatenated eigenvalues for Hess[1:1,1:1] to Hess[1:m,1:m], that is, all submatrices of size 1 through m.

source
QuantumPropagators.Arnoldi.extend_arnoldi!Function

Extend dimension of Hessenberg matrix by one.

extend_arnoldi!(Hess, q, m, H, dt; norm_min=1e-15)

extends the entries in Hess from size (m-1)×(m-1) to size m×m, and the list q of Arnoldi vectors from m to (m+1). It is assumed that the input Hess was created by a call to arnoldi! with extended=false or a previous call to extend_arnoldi!. Note that Hess itself is not resized, so it must be allocated to size m×m or greater on input.

source
QuantumPropagators.Interfaces.check_amplitudeMethod

Check amplitude appearing in Generator.

@test check_amplitude(ampl; tlist, quiet=false)

verifies that the given ampl is a valid element in the list of amplitudes of a Generator object. Specifically:

If for_parameterization (may require the RecursiveArrayTools package to be loaded):

  • get_parameters(ampl) must be defined and return a vector of floats. Mutating that vector must mutate the controls inside the ampl.

The function returns true for a valid amplitude and false for an invalid amplitude. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumPropagators.Interfaces.check_controlMethod

Check that control can be evaluated on a time grid.

@test check_control(
+  \right)\,,\]

where the Lindblad operators $Â_k$ are the elements of c_ops.

The Hamiltonian $Ĥ$ will generally be time-dependent; it may, for instance, be a Generator as returned by hamiltonian. For example, for a Hamiltonian with the terms (Ĥ₀, (Ĥ₁, ϵ₁), (Ĥ₂, ϵ₂)), where Ĥ₀, Ĥ₁, Ĥ₂ are matrices and ϵ₁ and ϵ₂ are functions of time, the resulting ℒ will be a Generator corresponding to the terms (ℒ₀, (ℒ₁, ϵ₁), (ℒ₂, ϵ₂)), where the initial term is the superoperator ℒ₀ for the static component of the Liouvillian, i.e., the commutator with the drift Hamiltonian Ĥ₀, plus the dissipator (sum over $k$), as a sparse matrix. Time-dependent Lindblad operators are not currently supported. The remaining elements are the tuples (ℒ₁, ϵ₁) and (ℒ₂, ϵ₂) corresponding to the commutators with the two control Hamiltonians, where ℒ₁ and ℒ₂ again are sparse matrices.

If $Ĥ$ is not time-dependent, the resulting ℒ will likewise be a static operator. Passing H=nothing with non-empty c_ops initializes a pure dissipator.

With convention=:TDSE, the Liouvillian will be constructed for the equation of motion $i \hbar \frac{d}{dt} \vec{\rho}(t) = ℒ \vec{\rho}(t)$ to match exactly the form of the time-dependent Schrödinger equation. While this notation is not standard in the literature of open quantum systems, it has the benefit that the resulting ℒ can be used in a numerical propagator for a (non-Hermitian) Schrödinger equation without any change. Thus, for numerical applications, convention=:TDSE is generally preferred. The returned ℒ differs between the two conventions only by a factor of $i$, since we generally assume $\hbar=1$.

The convention keyword argument is mandatory, to force a conscious choice.

See Goerz et. al. "Optimal control theory for a unitary operation under dissipative evolution", arXiv 1312.0111v2, Appendix B.2 for the explicit construction of the Liouvillian superoperator as a sparse matrix.

Passing check=false suppresses warnings and errors about unexpected types or the structure of the arguments, cf. hamiltonian.

source
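A hedged construction sketch (the two-level system and decay operator are placeholders):

using QuantumPropagators: hamiltonian, liouvillian

H₀ = ComplexF64[0 0; 0 1]
H₁ = ComplexF64[0 1; 1 0]
ϵ(t) = 0.1 * cos(2π * t)
Ĥ = hamiltonian(H₀, (H₁, ϵ))

κ = 0.01
c_ops = [sqrt(κ) * ComplexF64[0 1; 0 0]]  # decay from |1⟩ to |0⟩

ℒ = liouvillian(Ĥ, c_ops; convention=:TDSE)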
QuantumPropagators.Arnoldi.arnoldi!Method
m = arnoldi!(Hess, q, m, Ψ, H, dt=1.0; extended=true, norm_min=1e-15)

Calculate the Hessenberg matrix and Arnoldi vectors of H dt, from Ψ.

For a given order m, the m×m Hessenberg matrix is calculated and stored in the pre-allocated Hess. Further, an array of m normalized Arnoldi vectors is stored in the pre-allocated q, plus one additional unnormalized Arnoldi vector. The unnormalized (m+1)st vector could be used to easily extend a given m×m Hessenberg matrix to a (m+1)×(m+1) matrix.

If the extended Hessenberg matrix is requested (extended=true, default), the (m+1)st Arnoldi vector is also normalized, and its norm will be stored in the (m+1, m) entry of the (extended) Hessenberg matrix, which is an (m+1)×(m+1) matrix.

Return the size m of the calculated Hessenberg matrix. This will usually be the input m, except when the Krylov dimension of H starting from Ψ is less than m. E.g., if Ψ is an eigenstate of H, the returned m will be 1.

See https://en.wikipedia.org/wiki/Arnoldi_iteration for a description of the algorithm.

Arguments

  • Hess::Matrix{ComplexF64}: Pre-allocated storage for the Hessenberg matrix. Can be uninitialized on input. The matrix must be at least of size m×m, or (m+1)×(m+1) if extended=true. On output, the m×m sub-matrix of Hess (with the returned output m) will contain the Hessenberg matrix, and all other elements of Hess will be set to zero.
  • q: Pre-allocated array of states similar to Ψ, as storage for the calculated Arnoldi vectors. These may be un-initialized on input. Must be at least of length m+1
  • m: The requested dimensions of the output Hessenberg matrix.
  • Ψ: The starting vector for the Arnoldi procedure. This can be of any type, as long as Φ = H * Ψ results in a vector similar to Ψ, there is an inner product of Φ and Ψ (Ψ⋅Φ is defined), and norm(Ψ) is defined.
  • H: The operator (up to dt) for which to calculate the Arnoldi procedure. Can be of any type, as long as H * Ψ is defined.
  • dt: The implicit time step; the total operator for which to calculate the Arnoldi procedure is H * dt
  • extended: If true (default), calculate the extended Hessenberg matrix, and normalize the final Arnoldi vector
  • norm_min: the minimum value of the norm of Ψ at which Ψ should be considered the zero vector
source
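A self-contained sketch (random Hermitian matrix as a stand-in operator) of allocating the workspaces and calling arnoldi!:

using QuantumPropagators.Arnoldi: arnoldi!
using LinearAlgebra: normalize!
using Random

Random.seed!(42)
N, m, dt = 10, 5, 0.1
A = rand(ComplexF64, N, N)
H = (A + A') / 2                          # Hermitian stand-in operator
Ψ = normalize!(rand(ComplexF64, N))

Hess = zeros(ComplexF64, m + 1, m + 1)    # (m+1)×(m+1) for extended=true (default)
q = [similar(Ψ) for _ in 1:(m + 1)]       # storage for m+1 Arnoldi vectors
m_out = arnoldi!(Hess, q, m, Ψ, H, dt)    # actual Krylov dimension, usually == m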
QuantumPropagators.Arnoldi.diagonalize_hessenberg_matrixMethod
diagonalize_hessenberg_matrix(Hess, m; accumulate=false)

Diagonalize the m × m top left submatrix of the given Hessenberg matrix.

If accumulate is true, return the concatenated eigenvalues for Hess[1:1,1:1] to Hess[1:m,1:m], that is, all submatrices of size 1 through m.

source
QuantumPropagators.Arnoldi.extend_arnoldi!Function

Extend dimension of Hessenberg matrix by one.

extend_arnoldi!(Hess, q, m, H, dt; norm_min=1e-15)

extends the entries in Hess from size (m-1)×(m-1) to size m×m, and the list q of Arnoldi vectors from m to (m+1). It is assumed that the input Hess was created by a call to arnoldi! with extended=false or a previous call to extend_arnoldi!. Note that Hess itself is not resized, so it must be allocated to size m×m or greater on input.

source
QuantumPropagators.Interfaces.check_amplitudeMethod

Check amplitude appearing in Generator.

@test check_amplitude(ampl; tlist, quiet=false)

verifies that the given ampl is a valid element in the list of amplitudes of a Generator object. Specifically:

If for_parameterization (may require the RecursiveArrayTools package to be loaded):

  • get_parameters(ampl) must be defined and return a vector of floats. Mutating that vector must mutate the controls inside the ampl.

The function returns true for a valid amplitude and false for an invalid amplitude. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumPropagators.Interfaces.check_controlMethod

Check that control can be evaluated on a time grid.

@test check_control(
     control;
     tlist,
     for_parameterization=true,
     for_time_continuous=(control isa Function),
     quiet=false
-)

verifies the given control (one of the elements of the tuple returned by get_controls):

If for_time_continuous:

If for_parameterization:

  • get_parameters(control) must be defined and return a vector of floats. Mutating that vector must mutate the control.

The function returns true for a valid control and false for an invalid control. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumPropagators.Interfaces.check_generatorMethod

Check the dynamical generator for propagating state over tlist.

@test check_generator(
     generator; state, tlist,
     for_pwc=true, for_time_continuous=false,
     for_expval=true, for_parameterization=false,
     atol=1e-14, quiet=false)

verifies the given generator:

If for_pwc (default):

If for_time_continuous:

If for_parameterization (may require the RecursiveArrayTools package to be loaded):

  • get_parameters(generator) must be defined and return a vector of floats. Mutating that vector must mutate the controls inside the generator.

The function returns true for a valid generator and false for an invalid generator. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
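
As a usage sketch (the two-level Hamiltonian and cosine control below are made up for illustration, assuming hamiltonian is available from QuantumPropagators):

using Test
using QuantumPropagators: hamiltonian
using QuantumPropagators.Interfaces: check_generator

tlist = collect(range(0, 5; length=101))
H₀ = ComplexF64[1 0; 0 -1]
H₁ = ComplexF64[0 1; 1 0]
ϵ(t) = 0.1 * cos(2π * t)
gen = hamiltonian(H₀, (H₁, ϵ))
Ψ = ComplexF64[1, 0]                  # state used to probe the generator
@test check_generator(gen; state=Ψ, tlist)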
QuantumPropagators.Interfaces.check_operatorMethod

Check that op is a valid operator that can be applied to state.

@test check_operator(op; state, tlist=[0.0, 1.0],
                     for_expval=true, atol=1e-14, quiet=false)

verifies the given op relative to state. The state must pass check_state.

An "operator" is any object that evaluate returns when evaluating a time-dependent dynamic generator. The specific requirements for op are:

If QuantumPropagators.Interfaces.supports_inplace(state):

  • The 3-argument LinearAlgebra.mul! must apply op to the given state
  • The 5-argument LinearAlgebra.mul! must apply op to the given state
  • LinearAlgebra.mul! must match *, if applicable
  • LinearAlgebra.mul! must return the resulting state

If for_expval (typically required for optimal control):

  • LinearAlgebra.dot(state, op, state) must return a number
  • dot(state, op, state) must match dot(state, op * state), if applicable

The function returns true for a valid operator and false for an invalid operator. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumPropagators.Interfaces.check_parameterizedMethod

Check that the object supports the parameterization interface.

@test check_parameterized(object; name="::$(typeof(object))", quiet=false)

verifies that the given object:

  • can be passed to get_parameters, which must return an AbstractVector of Float64
  • is mutated by mutating the parameters obtained by get_parameters

See also

source
QuantumPropagators.Interfaces.check_parameterized_functionMethod

Check a ParameterizedFunction instance.

@test check_parameterized_function(f; tlist, quiet=false)

verifies that the given f:

See also

source
QuantumPropagators.Interfaces.check_propagatorMethod

Check that the given propagator implements the required interface.

@test check_propagator(propagator; atol=1e-14, quiet=false)

verifies that the propagator matches the interface described for an AbstractPropagator. The propagator must have been freshly initialized with init_prop.

  • propagator must have the properties state, tlist, t, parameters, backward, and inplace
  • propagator.state must be a valid state (see check_state)
  • If propagator.inplace is true, supports_inplace for propagator.state must also be true
  • propagator.tlist must be monotonically increasing.
  • propagator.t must be the first or last element of propagator.tlist, depending on propagator.backward
  • prop_step!(propagator) must be defined and return a valid state until the time grid is exhausted
  • For an in-place propagator, the state returned by prop_step! must be the propagator.state object
  • For a not-in-place propagator, the state returned by prop_step! must be a new object
  • prop_step! must advance propagator.t forward or backward one step on the time grid
  • prop_step! must return nothing when going beyond the time grid
  • set_t!(propagator, t) must be defined and set propagator.t
  • set_state!(propagator, state) must be defined and set propagator.state.
  • set_state!(propagator, state) for an in-place propagator must overwrite propagator.state in-place.
  • set_state! must return the set propagator.state
  • In a PiecewisePropagator, propagator.parameters must be a dict mapping controls to a vector of values, one for each interval on propagator.tlist
  • reinit_prop! must be defined and re-initialize the propagator
  • reinit_prop!(propagator, state) must be idempotent. That is, repeated calls to reinit_prop! leave the propagator unchanged.

The function returns true for a valid propagator and false for an invalid propagator. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumPropagators.Interfaces.check_stateMethod

Check that state is a valid element of a Hilbert space.

@test check_state(state; normalized=false, atol=1e-15, quiet=false)

verifies the following requirements:

  • The inner product (LinearAlgebra.dot) of two states must return a Complex number.
  • The LinearAlgebra.norm of state must be defined via the inner product. This is the definition of a Hilbert space, a.k.a. a "complete inner product space" or, more precisely, a "Banach space (normed vector space) where the norm is induced by an inner product".
  • The QuantumPropagators.Interfaces.supports_inplace method must be defined for state

Any state must support the following not-in-place operations:

  • state + state and state - state must be defined
  • copy(state) must be defined and return an object of the same type as state
  • c * state for a scalar c must be defined
  • norm(state + state) must fulfill the triangle inequality
  • zero(state) must be defined and produce a state with norm 0
  • 0.0 * state must produce a state with norm 0
  • copy(state) - state must have norm 0
  • norm(state) must have absolute homogeneity: norm(s * state) = |s| * norm(state)

If supports_inplace(state) is true, the state must also support the following:

  • similar(state) must be defined and return a valid state of the same type as state
  • copyto!(other, state) must be defined
  • fill!(state, c) must be defined
  • LinearAlgebra.lmul!(c, state) for a scalar c must be defined
  • LinearAlgebra.axpy!(c, state, other) must be defined
  • norm(state) must fulfill the same general mathematical norm properties as for the non-in-place norm.

If normalized (not required by default):

  • LinearAlgebra.norm(state) must be 1

It is strongly recommended to always support immutable operations (also for mutable states)

The function returns true for a valid state and false for an invalid state. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
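
For example, a plain complex vector fulfills the above requirements (a minimal sketch):

using Test
using LinearAlgebra: normalize
using QuantumPropagators.Interfaces: check_state

Ψ = normalize(ComplexF64[1, 1im, 0, 0.5])
@test check_state(Ψ)                     # general Hilbert-space requirements
@test check_state(Ψ; normalized=true)    # additionally require norm(Ψ) == 1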
QuantumPropagators.Interfaces.check_tlistMethod

Check that the given tlist is valid.

@test check_tlist(tlist; quiet=false)

verifies the given time grid. A valid time grid must

  • be a Vector{Float64},
  • contain at least two points (beginning and end),
  • be monotonically increasing

The function returns true for a valid time grid and false for an invalid time grid. Unless quiet=true, it will log an error to indicate which of the conditions failed.

source
QuantumPropagators.Interfaces.supports_inplaceMethod

Indicate whether a given state or operator supports in-place operations

supports_inplace(state)

Indicates that propagators can assume that the in-place requirements defined in QuantumPropagators.Interfaces.check_state hold. States with in-place support must also fulfill specific properties when interacting with operators, see QuantumPropagators.Interfaces.check_operator.

supports_inplace(op)

Indicates that the operator can be evaluated in-place with evaluate!, see QuantumPropagators.Interfaces.check_generator

Note that supports_inplace is not quite the same as Base.ismutable: When using custom structs for states or operators, even if those structs are not defined as mutable, they may still define the in-place interface (typically because their components are mutable).

source
QuantumPropagators.Controls.ParameterizedFunctionType

Abstract type for function-like objects with parameters.

A struct that is an implementation of a ParameterizedFunction:

  • must have a parameters field that is an AbstractVector of floats (e.g., a ComponentArrays.ComponentVector)
  • must be callable with a single float argument t,
  • may define getters and setters for referencing the values in parameters with convenient names.

The parameters field of any ParameterizedFunction can be accessed via get_parameters.

See How to define a parameterized control for an example. You may use the QuantumPropagators.Interfaces.check_parameterized_function to check the implementation of a ParameterizedFunction subtype.

source
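
A minimal sketch of such an implementation (the GaussianPulse type and its parameter layout are purely illustrative):

using QuantumPropagators.Controls: ParameterizedFunction, get_parameters

struct GaussianPulse <: ParameterizedFunction
    parameters::Vector{Float64}   # [A, t₀, σ]
end

function (f::GaussianPulse)(t)
    A, t₀, σ = f.parameters
    return A * exp(-(t - t₀)^2 / (2σ^2))
end

pulse = GaussianPulse([1.0, 5.0, 0.5])
pulse(5.0)               # callable with a single time argument
get_parameters(pulse)    # aliases pulse.parameters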
QuantumPropagators.Controls.discretizeMethod

Evaluate control at every point of tlist.

values = discretize(control, tlist; via_midpoints=true)

discretizes the given control to a Vector of values defined on the points of tlist.

If control is a function, it is first evaluated at the midpoint of tlist, see discretize_on_midpoints, and then the values on the midpoints are converted to values on tlist. This discretization is more stable than directly evaluating the control function at the values of tlist, and ensures that repeated round-trips between discretize and discretize_on_midpoints can be done safely, see the note in the documentation of discretize_on_midpoints.

The latter can still be achieved by passing via_midpoints=false. Such a direct discretization is suitable, e.g., for plotting, but it is unsuitable for round-trips between discretize and discretize_on_midpoints (constant controls on tlist may result in a zig-zag on the intervals of tlist).

If control is a vector, a copy of control will be returned if it is of the same length as tlist. Otherwise, control must have one less value than tlist, and is assumed to be defined on the midpoints of tlist. In that case, discretize acts as the inverse of discretize_on_midpoints. See discretize_on_midpoints for how control values on tlist and control values on the intervals of tlist are related.

source
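
For instance, with an illustrative Gaussian control (a sketch):

using QuantumPropagators.Controls: discretize

tlist = collect(range(0, 10; length=101))
ϵ(t) = exp(-(t - 5)^2)
vals = discretize(ϵ, tlist)   # vector of the same length as tlist (evaluated via midpoints)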
QuantumPropagators.Controls.discretize_on_midpointsMethod

Evaluate control at the midpoints of tlist.

values = discretize_on_midpoints(control, tlist)

discretizes the given control to a Vector of values on the midpoints of tlist. Hence, the resulting values will contain one less value than tlist.

If control is a vector of values defined on tlist (i.e., of the same length as tlist), it will be converted to a vector of values on the intervals of tlist. The value for the first and last "midpoint" will remain the original values at the beginning and end of tlist, in order to ensure exact boundary conditions. For all other midpoints, the value for that midpoint will be calculated by "un-averaging".

For example, for a control and tlist of length 5, consider the following diagram:

tlist index:       1   2   3   4   5
 tlist:             ⋅   ⋅   ⋅   ⋅   ⋅   input values cᵢ (i ∈ 1..5)
 midpoints:         x     x   x     x   output values pᵢ (i ∈ 1..4)
 midpoints index:   1     2   3     4

We will have $p₁=c₁$ for the first value, $p₄=c₅$ for the last value. For all other points, the control values $cᵢ = \frac{p_{i-1} + p_{i}}{2}$ are the average of the values on the midpoints. This implies the "un-averaging" for the midpoint values $pᵢ = 2 c_{i} - p_{i-1}$.

Note

An arbitrary input control array may not be compatible with the above averaging formula. In this case, the conversion will be "lossy" (discretize will not recover the original control array; the difference should be considered a "discretization error"). However, any further round-trip conversions between points and intervals are bijective and preserve the boundary conditions. In this case, the discretize_on_midpoints and discretize methods are each other's inverse. This also implies that for an optimal control procedure, it is safe to modify midpoint values. Modifying the values on the time grid directly, on the other hand, may accumulate discretization errors.

If control is a vector of one less length than tlist, a copy of control will be returned, under the assumption that the input is already properly discretized.

If control is a function, the function will be directly evaluated at the midpoints marked as x in the above diagram.

See also

  • get_tlist_midpoints – get all the midpoints on which the control will be discretized.
  • t_mid – get a particular midpoint.
  • discretize – discretize directly on tlist instead of on the midpoints
source
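
The round-trip behavior described in the note above can be sketched as follows (illustrative control):

using QuantumPropagators.Controls: discretize, discretize_on_midpoints

tlist = collect(range(0, 10; length=101))
ϵ(t) = exp(-(t - 5)^2)
pulse = discretize_on_midpoints(ϵ, tlist)      # 100 values on the intervals of tlist
vals = discretize(pulse, tlist)                # 101 values back on the points of tlist
pulse2 = discretize_on_midpoints(vals, tlist)
pulse2 ≈ pulse                                 # round trips between points and intervals are bijective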
QuantumPropagators.Controls.evaluate!Method

Update an existing evaluation of a generator.

evaluate!(op, generator, args...; vals_dict=IdDict())

performs an in-place update on an op that was obtained from a previous call to evaluate with the same generator, but for a different point in time and/or different values in vals_dict.

source
QuantumPropagators.Controls.evaluateMethod

Evaluate all controls.

In general, evaluate(object, args...; vals_dict=IdDict()) evaluates the object for a specific point in time indicated by the positional args. Any control in object is evaluated at the specified point in time. Alternatively, the vals_dict maps controls to values ("plug in this value for the given control").

For example,

op = evaluate(generator, t)

evaluates generator at time t. This requires that any control in generator is a callable that takes t as a single argument.

op = evaluate(generator, tlist, n)

evaluates generator for the n'th interval of tlist. This uses the definitions for the midpoints in discretize_on_midpoints. The controls in generator may be vectors (see discretize, discretize_on_midpoints) or callables of t.

op = evaluate(generator, t; vals_dict)
 op = evaluate(generator, tlist, n; vals_dict)

resolves any explicit time dependencies in generator at the specified point in time, but uses the value in the given vals_dict for any control in vals_dict.

a = evaluate(ampl, tlist, n; vals_dict=IdDict())
a = evaluate(ampl, t; vals_dict=IdDict())

evaluates a control amplitude to a scalar by evaluating any explicit time dependency, and by replacing each control with the corresponding value in vals_dict.

Calling evaluate for an object with no implicit or explicit time dependence should return the object unchanged.

For generators without any explicit time dependence,

op = evaluate(generator; vals_dict)

can be used. The vals_dict in this case must contain values for all controls in generator.

See also:

  • evaluate! — update an existing operator with a re-evaluation of a generator at a different point in time.

source
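
For example, reusing an illustrative two-level generator with a function control (a sketch):

using QuantumPropagators: hamiltonian
using QuantumPropagators.Controls: evaluate

tlist = collect(range(0, 5; length=101))
ϵ(t) = 0.1 * cos(2π * t)
gen = hamiltonian(ComplexF64[1 0; 0 -1], (ComplexF64[0 1; 1 0], ϵ))
op_t = evaluate(gen, 0.5)                               # evaluate at t = 0.5
op_n = evaluate(gen, tlist, 3)                          # evaluate on the 3rd interval of tlist
op_0 = evaluate(gen, 0.5; vals_dict=IdDict(ϵ => 0.0))   # plug in the value 0.0 for ϵ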
QuantumPropagators.Controls.get_controlsMethod
get_controls(operator)

for a static operator (matrix) returns an empty tuple.

source
QuantumPropagators.Controls.get_controlsMethod

Extract a Tuple of controls.

controls = get_controls(generator)

extracts the controls from a single dynamical generator.

For example, if generator = hamiltonian(H0, (H1, ϵ1), (H2, ϵ2)), extracts (ϵ1, ϵ2).

source
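
A short sketch mirroring the example above:

using QuantumPropagators: hamiltonian
using QuantumPropagators.Controls: get_controls

ϵ₁(t) = cos(t)
ϵ₂(t) = sin(t)
H₀ = ComplexF64[1 0; 0 -1]
H₁ = ComplexF64[0 1; 1 0]
H₂ = ComplexF64[0 -1im; 1im 0]
gen = hamiltonian(H₀, (H₁, ϵ₁), (H₂, ϵ₂))
get_controls(gen)   # → (ϵ₁, ϵ₂)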
QuantumPropagators.Controls.get_parametersMethod

Obtain analytic parameters of the given control.

parameters = get_parameters(control)

obtains parameters as an AbstractVector{Float64} containing any tunable analytic parameters associated with the control. The specific type of parameters depends on how control is defined, but a ComponentArrays.ComponentVector should be a common array type.

Mutating the resulting vector must directly affect the control in any subsequent call to evaluate. That is, the values in parameters must alias values inside the control.

Note that the control must be an object specifically designed to have analytic parameters. Typically, it should be implemented as a subtype of ParameterizedFunction. For a simple function ϵ(t) or a vector of pulse values, which are the default types of controls discussed in the documentation of hamiltonian, the get_parameters function will return an empty vector.

More generally,

parameters = get_parameters(object)

collects and combines all unique parameter arrays from the controls inside the object. The object may be a Generator, Trajectory, ControlProblem, or any other object for which get_controls(object) is defined. If there are multiple controls with different parameter arrays, these are combined in a RecursiveArrayTools.ArrayPartition. This requires the RecursiveArrayTools package to be loaded. Again, mutating parameters directly affects the underlying controls.

The parameters may be used as part of the parameters attribute of a propagator for time-continuous dynamics, like a general ODE solver, or in an optimization that tunes analytic control parameters, e.g., with a Nelder-Mead method. Examples might include the widths, peak amplitudes, and times of a superposition of Gaussians [9], cf. the example of a ParameterizedFunction, or the amplitudes associated with spectral components in a random truncated basis [10].

The parameters are not intended for optimization methods such as GRAPE or Krotov that fundamentally use a piecewise-constant control ansatz. In the context of such methods, the "control parameters" are always the amplitudes of the control at the mid-points of the time grid, as obtained by discretize_on_midpoints, and get_parameters is ignored.

source
QuantumPropagators.Controls.get_tlist_midpointsMethod

Shift time grid values to the interval midpoints

tlist_midpoints = get_tlist_midpoints(
     tlist; preserve_start=true, preserve_end=true
)

takes a vector tlist of length $n$ and returns a Vector{Float64} of length $n-1$ containing the midpoint values of each interval. The intervals in tlist are not required to be uniform.

By default, the first and last points of tlist are preserved, see discretize_on_midpoints. This behavior can be disabled by passing preserve_start and preserve_end as false in order to use the midpoints of the first and last interval, respectively.

See also

  • t_mid – get a particular midpoint.
source
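
For example (a sketch; the commented values follow from the convention described above):

using QuantumPropagators.Controls: get_tlist_midpoints

tlist = [0.0, 1.0, 2.0, 3.0]
get_tlist_midpoints(tlist)
# expected: [0.0, 1.5, 3.0]   (first and last entries snap to the grid boundaries)
get_tlist_midpoints(tlist; preserve_start=false, preserve_end=false)
# expected: [0.5, 1.5, 2.5]   (true interval midpoints)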
QuantumPropagators.Controls.substituteMethod

Substitute inside the given object.

object = substitute(object, replacements)

returns a modified object with the replacements defined in the given replacements dictionary. Things that can be replaced include operators, controls, and amplitudes. For example,

generator = substitute(generator::Generator, replacements)
 operator = substitute(operator::Operator, replacements)
amplitude = substitute(amplitude, controls_replacements)

Note that substitute cannot be used to replace dynamic quantities, e.g., controls, with static values. Use evaluate instead for that purpose.

source
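
For example, swapping a guess control for a modified one inside a generator (a sketch; an IdDict keyed by the original control is used here, mirroring the vals_dict convention of evaluate):

using QuantumPropagators: hamiltonian
using QuantumPropagators.Controls: substitute

ϵ_guess(t) = 0.1 * cos(2π * t)
ϵ_new(t) = 0.1 * cos(2π * t) * exp(-(t - 2.5)^2)
gen = hamiltonian(ComplexF64[1 0; 0 -1], (ComplexF64[0 1; 1 0], ϵ_guess))
gen_new = substitute(gen, IdDict(ϵ_guess => ϵ_new))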
QuantumPropagators.Controls.t_midMethod

Midpoint of n'th interval of tlist.

t = t_mid(tlist, n)

returns the t that is the midpoint between points tlist[n+1] and tlist[n], but snapping to the beginning/end to follow the convention explained in discretize_on_midpoints (to preserve exact boundary conditions at the edges of the time grid.)

See also

source
QuantumPropagators.Storage.get_from_storage!Method

Obtain data from storage.

get_from_storage!(data, storage, i)

extracts data from the storage for the i'th time slot. Inverse of write_to_storage!. This modifies data in-place. If get_from_storage! is implemented for arbitrary observables, it is the developer's responsibility that init_storage, write_to_storage!, and get_from_storage! are compatible.

To extract immutable data, the non-in-place version

data = get_from_storage(storage, i)

can be used.

source
QuantumPropagators.Storage.get_from_storageMethod

Obtain immutable data from storage.

data = get_from_storage(storage, i)

See get_from_storage!.

source
QuantumPropagators.Storage.init_storageMethod

Create a storage array for propagation.

storage = init_storage(state, tlist)

creates a storage array suitable for storing a state for each point in tlist.

storage = init_storage(state, tlist, observables)

creates a storage array suitable for the data generated by the observables applied to state, see map_observables, for each point in tlist.

storage = init_storage(data, nt)

creates a storage array suitable for storing data nt times, where nt=length(tlist). By default, this will be a vector of typeof(data) and length nt, or an n × nt Matrix with the same eltype as data if data is a Vector of length n.

source
QuantumPropagators.Storage.map_observableMethod

Apply a single observable to state.

data = map_observable(observable, tlist, i, state)

By default, observable can be one of the following:

  • A function taking the three arguments state, tlist, i, where state is defined at time tlist[i].
  • A function taking a single argument state, under the assumption that the observable is time-independent
  • A matrix for which to calculate the expectation value with respect to the vector state.

The default map_observables delegates to this function.

source
QuantumPropagators.Storage.map_observablesMethod

Obtain "observable" data from state.

data = map_observables(observables, tlist, i, state)

calculates the data for a tuple of observables applied to state defined at time tlist[i]. For a single observable (tuple of length 1), simply return the result of map_observable.

For multiple observables, return the tuple resulting from applying map_observable for each observable. If the tuple is "uniform" (all elements are of the same type, e.g. if each observable calculates the expectation value of a Hermitian operator), it is converted to a Vector. This allows for compact storage in a storage array, see init_storage.

source
QuantumPropagators.Storage.write_to_storage!Method

Place data into storage for time slot i.

write_to_storage!(storage, i, data)

for a storage array created by init_storage stores the data obtained from map_observables at time slot i.

Conceptually, this corresponds roughly to storage[i] = data, but storage may have its own idea on how to store data for a specific time slot. For example, with the default init_storage Vector data will be stored in a matrix, and write_to_storage! will in this case write data to the i'th column of the matrix.

For a given type of storage and data, it is the developer's responsibility that init_storage and write_to_storage! are compatible.

source
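
A minimal sketch tying the storage routines together (illustrative state and time grid; in an actual propagation the data written at slot i would be the propagated state at tlist[i]):

using QuantumPropagators.Storage: init_storage, write_to_storage!, get_from_storage

tlist = collect(range(0, 1; length=11))
Ψ = ComplexF64[1, 0]
storage = init_storage(Ψ, tlist)
for i in eachindex(tlist)
    write_to_storage!(storage, i, Ψ)
end
Ψ₅ = get_from_storage(storage, 5)   # data stored for the 5th time slot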
QuantumPropagators.SpectralRange.random_stateMethod

Random normalized quantum state.

    Ψ = random_state(H; rng=Random.GLOBAL_RNG)

returns a random normalized state compatible with the Hamiltonian H. This is intended to provide a starting vector for estimating the spectral radius of H via an Arnoldi method.

source
QuantumPropagators.SpectralRange.ritzvalsFunction

Calculate a vector for Ritz values converged to a given precision.

R = ritzvals(G, state, m_min, m_max=2*m_min; prec=1e-5, norm_min=1e-15)

calculates a complex vector R of at least m_min (assuming a sufficient Krylov dimension) and at most m_max Ritz values.

source
QuantumPropagators.SpectralRange.specrangeMethod
E_min, E_max = specrange(
     H, :arnoldi;
     rng=Random.GLOBAL_RNG,
     state=random_state(H; rng),
     prec=1e-3,
     norm_min=1e-15,
     enlarge=true
)

uses Arnoldi iteration with state as the starting vector. It approximates the eigenvalues of H with between m_min and m_max Ritz values, until the lowest and highest eigenvalue are stable to a relative precision of prec. The norm_min parameter is passed to the underlying arnoldi!.

If enlarge=true (default) the returned E_min and E_max will be enlarged via a heuristic to slightly over-estimate the spectral radius instead of under-estimating it.

source
QuantumPropagators.SpectralRange.specrangeMethod
E_min, E_max = specrange(H, :diag)

uses exact diagonalization via the standard eigvals function to obtain the smallest and largest eigenvalue. This should only be used for relatively small matrices.

source
QuantumPropagators.SpectralRange.specrangeMethod
E_min, E_max = specrange(H, :manual; E_min, E_max)

directly returns the given E_min and E_max without considering H.

source
QuantumPropagators.SpectralRange.specrangeMethod

Calculate the spectral range of a Hamiltonian H on the real axis.

E_min, E_max = specrange(H; method=:auto, kwargs...)

calculates the approximate lowest and highest eigenvalues of H. Any imaginary part in the eigenvalues is ignored: the routine is intended for (although not strictly limited to) a Hermitian H.

This delegates to

specrange(H, method; kwargs...)

for the different methods.

The default method=:auto chooses the best method for the given H. This is :diag for small matrices, and :arnoldi otherwise. If both E_min and E_max are given in the kwargs, those will be returned directly (method=:manual).

Keyword arguments not relevant to the underlying implementation will be ignored.

source
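
For example, with a random Hermitian test matrix (a sketch):

using QuantumPropagators.SpectralRange: specrange

A = randn(ComplexF64, 64, 64)
H = 0.5 * (A + A')                         # Hermitian test Hamiltonian
E_min, E_max = specrange(H)                # method=:auto
E_min, E_max = specrange(H; method=:diag)  # force exact diagonalization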
QuantumPropagators.Newton.NewtonWrkType
NewtonWrk(v0, m_max=10)

Workspace for the Newton-with-restarted-Arnoldi propagation routine.

Initializes the workspace for the propagation of a vector v0, using a maximum Krylov dimension of m_max in each restart iteration. Note that m_max should be smaller than the length of v0.

source
QuantumPropagators.Newton.extend_leja!Method
extend_leja!(leja, n, newpoints, n_use)

Given an array of n (ordered) Leja points, extract n_use points from newpoints, and append them to the existing Leja points. The array leja should be sufficiently large to hold the new Leja points, which are appended after index n_old. It will be re-allocated if necessary and may have a size of up to 2*(n+n_use).

Arguments

  • leja: Array of Leja values. Must contain the "old" Leja values to be kept in leja(0:n-1). On output, the n_use new Leja points will be in leja(n:n+n_use-1), for the original value of n. The leja array must use zero-based indexing.
  • n: On input, number of "old" leja points in leja. On output, total number of leja points (i.e. n=n+n_use)
  • newpoints: On input, candidate points for new leja points. The n_use best values will be chosen and added to leja. On output, the values of new_points are undefined.
  • n_use: Number of points that should be added to leja
source
QuantumPropagators.Newton.extend_newton_coeffs!Method
extend_newton_coeffs!(a, n_a, leja, func, n_leja, radius)

Extend the array a of existing Newton coefficients for the expansion of the func from n_a coefficients to n_leja coefficients. Return a new value n_a=n_a+n_leja with the total number of Newton coefficients in the updated a.

Arguments

  • a: On input, a zero-based array of length n_a or greater, containing Newton coefficients. On output, an array containing a total of n_leja coefficients. The array a will be resized if necessary, and may have a length greater than n_leja on output.
  • n_a: The number of Newton coefficients in a, on input. Elements of a beyond the first n_a elements will be overwritten.
  • leja: Array of normalized Leja points, containing at least n_leja elements.
  • func: Function for which to calculate Newton coefficients
  • n_leja: The number of elements in leja to use for calculating new coefficients, and the total number of Newton coefficients on output
  • radius: Normalization radius for divided differences
source
QuantumPropagators.Newton.newton!Method
newton!(Ψ, H, dt, wrk; func=(z -> exp(-1im*z)), norm_min=1e-14, relerr=1e-12,
        max_restarts=50, _...)

Evaluate Ψ = func(H*dt) Ψ using a Newton-with-restarted-Arnoldi scheme.

Arguments

  • Ψ: The state to propagate, will be overwritten in-place with the propagated state
  • H: Operator acting on Ψ. Together with dt, this is the argument to func
  • dt: Implicit time step. Together with H, this is the argument to func
  • wrk: Work array, initialized with NewtonWrk
  • func: The function to apply to H dt, taking a single (scalar) complex-valued argument z in place of H dt. The default func evaluates the time evolution operator for the Schrödinger equation
  • norm_min: the minimum norm at which to consider a state similar to Ψ as zero
  • relerr: The relative error defining the convergence condition for the restart iteration. Propagation stops when the norm of the accumulated Ψ is stable up to the given relative error
  • max_restarts: The maximum number of restart iterations. Exceeding max_restarts will throw an AssertionError.

All other keyword arguments are ignored.

source
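
A minimal usage sketch (illustrative operator and state; m_max must stay below the dimension of Ψ):

using LinearAlgebra: normalize!
using QuantumPropagators.Newton: NewtonWrk, newton!

N = 16
A = randn(ComplexF64, N, N)
H = 0.5 * (A + A')                   # Hermitian test operator
Ψ = normalize!(randn(ComplexF64, N))
wrk = NewtonWrk(Ψ, 8)
newton!(Ψ, H, 0.1, wrk)              # overwrites Ψ with exp(-i * H * 0.1) applied to the input Ψ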
QuantumPropagators.Cheby.ChebyWrkType

Workspace for the Chebychev propagation routine.

ChebyWrk(Ψ, Δ, E_min, dt; limit=1e-12)

initializes the workspace for the propagation of a state similar to Ψ under a Hamiltonian with eigenvalues between E_min and E_min + Δ, and a time step dt. Chebychev coefficients smaller than the given limit are discarded.

source
QuantumPropagators.Cheby.cheby!Method

Evaluate Ψ = exp(-𝕚 * H * dt) Ψ in-place.

cheby!(Ψ, H, dt, wrk; E_min=nothing, check_normalization=false)

Arguments

  • Ψ: on input, initial vector. Will be overwritten with result.
  • H: Hermitian operator
  • dt: time step
  • wrk: internal workspace
  • E_min: minimum eigenvalue of H, to be used instead of the E_min from the initialization of wrk. The same wrk may be used for different values of E_min, as long as the spectral radius Δ and the time step dt are the same as those used for the initialization of wrk.
  • check_normalization: perform checks that H does not exceed the spectral radius for which the workspace was initialized.

The routine will not allocate any internal storage. This implementation requires copyto!, lmul!, and axpy! to be implemented for Ψ, and the three-argument mul! for Ψ and H.

source
QuantumPropagators.Cheby.chebyMethod

Evaluate Ψ = exp(-𝕚 * H * dt) Ψ.

Ψ_out = cheby(Ψ, H, dt, wrk; E_min=nothing, check_normalization=false)

acts like cheby! but does not modify Ψ in-place.

source
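
A sketch of a single Chebychev step (illustrative Hermitian matrix; the spectral range is obtained via specrange):

using LinearAlgebra: normalize!
using QuantumPropagators.SpectralRange: specrange
using QuantumPropagators.Cheby: ChebyWrk, cheby

N = 16
A = randn(ComplexF64, N, N)
H = 0.5 * (A + A')
Ψ = normalize!(randn(ComplexF64, N))
dt = 0.1
E_min, E_max = specrange(H; method=:diag)
wrk = ChebyWrk(Ψ, E_max - E_min, E_min, dt)
Ψ_out = cheby(Ψ, H, dt, wrk)   # Ψ itself is left unchanged; use cheby! for in-place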
QuantumPropagators.Cheby.cheby_coeffs!Function

Calculate Chebychev coefficients in-place.

n::Int = cheby_coeffs!(coeffs, Δ, dt, limit=1e-12)

overwrites the first n values in coeffs with new coefficients larger than limit for the given new spectral radius Δ and time step dt. The coeffs array will be resized if necessary, and may have a length > n on exit.

See also cheby_coeffs for a non-in-place version.

source
QuantumPropagators.Cheby.cheby_coeffsMethod

Calculate Chebychev coefficients.

a::Vector{Float64} = cheby_coeffs(Δ, dt; limit=1e-12)

return an array of coefficients larger than limit.

Arguments

  • Δ: the spectral radius of the underlying operator
  • dt: the time step

See also cheby_coeffs! for an in-place version.

source
+)

uses Arnoldi iteration with state as the starting vector. It approximates the eigenvalues of H with between m_min and m_max Ritz values, until the lowest and highest eigenvalue are stable to a relative precision of prec. The norm_min parameter is passed to the underlying arnoldi!.

If enlarge=true (default) the returned E_min and E_max will be enlarged via a heuristic to slightly over-estimate the spectral radius instead of under-estimating it.

source
QuantumPropagators.SpectralRange.specrangeMethod
E_min, E_max = specrange(H, :diag)

uses exact diagonization via the standard eigvals function to obtain the smallest and largest eigenvalue. This should only be used for relatively small matrices.

source
QuantumPropagators.SpectralRange.specrangeMethod
E_min, E_max = specrange(H, :manual; E_min, E_max)

directly returns the given E_min and E_max without considering H.

source
QuantumPropagators.SpectralRange.specrangeMethod

Calculate the spectral range of a Hamiltonian H on the real axis.

E_min, E_max = specrange(H; method=:auto, kwargs...)

calculates the approximate lowest and highest eigenvalues of H. Any imaginary part in the eigenvalues is ignored: the routine is intended for (although not strictly limited to) a Hermitian H.

This delegates to

specrange(H, method; kwargs...)

for the different methods.

The default method=:auto chooses the best method for the given H. This is :diag for small matrices, and :arnoldi otherwise. If both E_min and E_max are given in the kwargs, those will be returned directly (method=:manual).

Keyword arguments not relevant to the underlying implementation will be ignored.

source
QuantumPropagators.Newton.NewtonWrkType
NewtonWrk(v0, m_max=10)

Workspace for the Newton-with-restarted-Arnoldi propagation routine.

Initializes the workspace for the propagation of a vector v0, using a maximum Krylov dimension of m_max in each restart iteration. Note that m_max should be smaller than the length of v0.

source
QuantumPropagators.Newton.extend_leja!Method
extend_leja!(leja, n, newpoints, n_use)

Given an array of n (ordered) Leja points, extract n_use points from newpoints, and append them to the existing Leja points. The array leja should be sufficiently large to hold the new Leja points, which are appended after index n_old. It will be re-allocated if necessary and may have a size of up to 2*(n+n_use).

Arguments

  • leja: Array of leja values. Must contain the "old" leja values to be kept in leja(0:n-1). On output, n_use new leja points will be in leja(n+:n+n_use-1), for the original value of n. The leja array must use zero-based indexing.
  • n: On input, number of "old" leja points in leja. On output, total number of leja points (i.e. n=n+n_use)
  • newpoints: On input, candidate points for new leja points. The n_use best values will be chosen and added to leja. On output, the values of new_points are undefined.
  • n_use: Number of points that should be added to leja
source
QuantumPropagators.Newton.extend_newton_coeffs!Method
extend_newton_coeffs!(a, n_a, leja, func, n_leja, radius)

Extend the array a of existing Newton coefficients for the expansion of the func from n_a coefficients to n_leja coefficients. Return a new value n_a=n_a+n_leja with the total number of Newton coefficients in the updated a.

Arguments

  • a: On input, a zero-based array of length n_a or greater, containing Newton coefficients. On output, array containing a total n_leja coefficients. The array a will be resized if necessary, and may have a length greater than n_leja on output
  • n_a: The number of Newton coefficients in a, on input. Elements of a beyond the first n_a elements will be overwritten.
  • leja: Array of normalized Leja points, containing at least n_leja elements.
  • func: Function for which to calculate Newton coefficients
  • n_leja: The number of elements in leja to use for calculating new coefficients, and the total number of Newton coefficients on output
  • radius: Normalization radius for divided differences
source
QuantumPropagators.Newton.newton!Method
newton!(Ψ, H, dt, wrk; func=(z -> exp(-1im*z)), norm_min=1e-14, relerr=1e-12,
        max_restarts=50, _...)

Evaluate Ψ = func(H*dt) Ψ using a Newton-with-restarted-Arnoldi scheme.

Arguments

  • Ψ: The state to propagate, will be overwritten in-place with the propagated state
  • H: Operator acting on Ψ. Together with dt, this is the argument to func
  • dt: Implicit time step. Together with H, this is the argument to func
  • wrk: Workspace, initialized with NewtonWrk
  • func: The function to apply to H dt, taking a single (scalar) complex-valued argument z in place of H dt. The default func evaluates the time evolution operator for the Schrödinger equation
  • norm_min: the minimum norm at which to consider a state similar to Ψ as zero
  • relerr: The relative error defining the convergence condition for the restart iteration. Propagation stops when the norm of the accumulated Ψ is stable up to the given relative error
  • max_restarts: The maximum number of restart iterations. Exceeding max_restarts will throw an AssertionError.

All other keyword arguments are ignored.
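A minimal, self-contained usage sketch (the Hamiltonian H, state Ψ, and time step dt below are made-up examples):

using LinearAlgebra
using QuantumPropagators.Newton: NewtonWrk, newton!

N = 64
A = rand(ComplexF64, N, N)
H = Hermitian(A + A')                 # illustrative Hermitian operator
Ψ = normalize(rand(ComplexF64, N))    # initial state
dt = 0.1

wrk = NewtonWrk(Ψ, 10)                # Krylov dimension m_max=10 per restart
newton!(Ψ, H, dt, wrk)                # Ψ ← exp(-i*H*dt) Ψ, in-place (default func)
@assert norm(Ψ) ≈ 1.0                 # unitary propagation preserves the norm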

source
QuantumPropagators.Cheby.ChebyWrkType

Workspace for the Chebychev propagation routine.

ChebyWrk(Ψ, Δ, E_min, dt; limit=1e-12)

initializes the workspace for the propagation of a state similar to Ψ under a Hamiltonian with eigenvalues between E_min and E_min + Δ, and a time step dt. Chebychev coefficients smaller than the given limit are discarded.

source
QuantumPropagators.Cheby.cheby!Method

Evaluate Ψ = exp(-𝕚 * H * dt) Ψ in-place.

cheby!(Ψ, H, dt, wrk; E_min=nothing, check_normalization=false)

Arguments

  • Ψ: on input, initial vector. Will be overwritten with result.
  • H: Hermitian operator
  • dt: time step
  • wrk: internal workspace
  • E_min: minimum eigenvalue of H, to be used instead of the E_min from the initialization of wrk. The same wrk may be used for different values of E_min, as long as the spectral radius Δ and the time step dt are the same as those used for the initialization of wrk.
  • check_normalization: perform checks that H does not exceed the spectral radius for which the workspace was initialized.

The routine will not allocate any internal storage. This implementation requires copyto!, lmul!, and axpy! to be implemented for Ψ, and the three-argument mul! for Ψ and H.
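A minimal usage sketch (H, Ψ, and dt are made-up examples; the spectral range is obtained via specrange as documented above):

using LinearAlgebra
using QuantumPropagators.SpectralRange: specrange
using QuantumPropagators.Cheby: ChebyWrk, cheby!

N = 64
A = rand(ComplexF64, N, N)
H = Hermitian(A + A')                 # illustrative Hermitian operator
Ψ = normalize(rand(ComplexF64, N))    # initial state
dt = 0.1

E_min, E_max = specrange(H, :diag)    # exact spectral range of H
Δ = E_max - E_min                     # spectral radius
wrk = ChebyWrk(Ψ, Δ, E_min, dt)       # drops coefficients below limit=1e-12
cheby!(Ψ, H, dt, wrk)                 # Ψ ← exp(-i*H*dt) Ψ, in-place
@assert norm(Ψ) ≈ 1.0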

source
QuantumPropagators.Cheby.chebyMethod

Evaluate Ψ = exp(-𝕚 * H * dt) Ψ.

Ψ_out = cheby(Ψ, H, dt, wrk; E_min=nothing, check_normalization=false)

acts like cheby! but does not modify Ψ in-place.

source
QuantumPropagators.Cheby.cheby_coeffs!Function

Calculate Chebychev coefficients in-place.

n::Int = cheby_coeffs!(coeffs, Δ, dt, limit=1e-12)

overwrites the first n values in coeffs with new coefficients larger than limit for the given new spectral radius Δ and time step dt. The coeffs array will be resized if necessary, and may have a length > n on exit.

See also cheby_coeffs for a non-in-place version.

source
QuantumPropagators.Cheby.cheby_coeffsMethod

Calculate Chebychev coefficients.

a::Vector{Float64} = cheby_coeffs(Δ, dt; limit=1e-12)

returns an array of coefficients larger than limit.

Arguments

  • Δ: the spectral radius of the underlying operator
  • dt: the time step

See also cheby_coeffs! for an in-place version.
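For illustration (the values of Δ and dt are made up):

using QuantumPropagators.Cheby: cheby_coeffs, cheby_coeffs!

Δ = 20.0                            # spectral radius of the operator
dt = 0.05                           # time step
a = cheby_coeffs(Δ, dt)             # all coefficients larger than limit=1e-12
coeffs = copy(a)
n = cheby_coeffs!(coeffs, 2Δ, dt)   # reuse (and resize) the buffer for a larger Δ
@assert n ≥ length(a)               # a larger spectral radius needs more coefficients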

source
diff --git a/dev/index.html b/dev/index.html index 9eca660..a65e406 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · Krotov.jl
+Home · Krotov.jl
diff --git a/dev/objects.inv b/dev/objects.inv index 3c8b765..42f4f5c 100644 Binary files a/dev/objects.inv and b/dev/objects.inv differ diff --git a/dev/overview/index.html b/dev/overview/index.html index 15da21c..0abcf8d 100644 --- a/dev/overview/index.html +++ b/dev/overview/index.html @@ -1,2 +1,2 @@ -Overview · Krotov.jl
+Overview · Krotov.jl
diff --git a/dev/references/index.html b/dev/references/index.html index 7dab876..9694d03 100644 --- a/dev/references/index.html +++ b/dev/references/index.html @@ -1,2 +1,2 @@ -References · Krotov.jl

+References · Krotov.jl

References

[1]
V. F. Krotov. Global Methods in Optimal Control (Dekker, New York, NY, USA, 1996).
[2]
J. Somlói, V. A. Kazakov and D. J. Tannor. Controlled dissociation of I₂ via optical transitions between the X and B electronic states. Chem. Phys. 172, 85 (1993).
[3]
A. Bartana, R. Kosloff and D. J. Tannor. Laser cooling of internal degrees of freedom. II. J. Chem. Phys. 106, 1435 (1997).
[4]
J. P. Palao and R. Kosloff. Optimal control theory for unitary transformations. Phys. Rev. A 68, 062308 (2003).
[5]
D. M. Reich, M. Ndong and C. P. Koch. Monotonically convergent optimization in quantum control using Krotov's method. J. Chem. Phys. 136, 104103 (2012).
[6]
M. H. Goerz, D. Basilewitsch, F. Gago-Encinas, M. G. Krauss, K. P. Horn, D. M. Reich and C. P. Koch. Krotov: A Python implementation of Krotov's method for quantum optimal control. SciPost Phys. 7, 080 (2019).
[7]
M. H. Goerz, S. C. Carrasco and V. S. Malinovsky. Quantum Optimal Control via Semi-Automatic Differentiation. Quantum 6, 871 (2022).
[8]
M. H. Goerz, D. M. Reich and C. P. Koch. Optimal control theory for a unitary operation under dissipative evolution. New J. Phys. 16, 055012 (2014).
[9]
S. Machnes, E. Assémat, D. Tannor and F. K. Wilhelm. Tunable, Flexible, and Efficient Optimization of Control Pulses for Practical Qubits. Phys. Rev. Lett. 120, 150401 (2018).
[10]
T. Caneva, T. Calarco and S. Montangero. Chopped random-basis quantum optimization. Phys. Rev. A 84, 022326 (2011).