The Use of Vector Magnetogram Data in MHD Models of the Solar Atmosphere and Prospects for an Assimilative Model. George H. Fisher, Space Sciences Laboratory, University of California, Berkeley. (Errors and lack of clarity introduced by Brian T. Welsch)

  • What would an "assimilative model" of the solar atmosphere consist of? A time-evolving physical model of the Sun's atmosphere, or a portion of the Sun's atmosphere, which can be corrected by time-dependent measurements that can be related in some manner to properties of the solar atmosphere. In particular, this means a 3D MHD model of the Sun's atmosphere, from photosphere to corona, that is updated by means of vector magnetograms.

  • What are the most important elements of a physics-based model of the Sun?

    Nearly all transient phenomena, such as solar-initiated space weather events, are driven by, or strongly affected by, magnetic fields.

    A fluid treatment (MHD) is reasonable most of the time (except, probably, during solar flares).

    Magnetic fields thread all layers of the Sun's convection zone and atmosphere.

    Maps of the estimated solar magnetic field (line-of-sight component) can be made regularly in the photosphere.

    In the near future, maps of all 3 components of the estimated magnetic field (vector magnetograms) will be taken regularly.

    Vector magnetograms are essential for determining the free energy available in the solar atmosphere to drive violent phenomena. Without vector magnetograms, solar models are not meaningfully constrained.

  • Schematic diagram of an assimilative model employing the Kalman filter approach (diagram taken from Welch and Bishop, 2006, "An Introduction to the Kalman Filter"):

    A is the physical model time-advance operator

    H is the operator relating the state variable x to the observable z

    K is the Kalman gain (filter) operator

    Q is the estimated process (model) error

    R is the measurement error

    P is the estimate of the state-variable error
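    For reference, the standard discrete Kalman filter cycle in this notation (the linear filter of Welch & Bishop, with the optional control-input term omitted) is:

        \hat{x}_k^- = A \, \hat{x}_{k-1}, \qquad P_k^- = A P_{k-1} A^T + Q            (time update / predict)

        K_k = P_k^- H^T \left( H P_k^- H^T + R \right)^{-1}                            (Kalman gain)

        \hat{x}_k = \hat{x}_k^- + K_k ( z_k - H \hat{x}_k^- ), \qquad P_k = (I - K_k H) P_k^-   (measurement update / correct)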

  • Needed ingredients for an assimilative (e.g. Kalman filter) model of the solar atmosphere:

    A reasonably good physical model

    Measurements with a good enough time cadence and accuracy to be useful

    A well-understood connection between physical and measured variables

    A good understanding of the data and model errors

    Where do we stand with respect to these requirements?

  • 1. What are the minimum requirements for a reasonably good physical model?

    The model must accommodate the range of conditions from the photosphere, where magnetic fields can be routinely measured, into the corona, where "space weather" events occur.

    The model must include the dominant terms in the energy equation that apply in the photosphere-corona system. The dominant terms are drastically different in the different parts of the domain.

    The model must be able to accommodate the wide range of physical and temporal scales from the photosphere to the corona.

    The model must be able to accommodate vector magnetic field maps as a time-dependent boundary condition. This is required whether or not the model is truly assimilative!

    Until recently, no existing models satisfied these requirements. Here is a brief summary of the challenges:

  • Numerical challenges: A dynamic numerical model extending from below the photosphere out into the corona must:

    span a ~ 10 - 15 order of magnitude change in gas density and a thermodynamic transition from the 1 MK corona to the optically thick, cooler layers of the low atmosphere, visible surface, and below;

    resolve a ~ 100 km photospheric pressure scale height while simultaneously following large-scale evolution (we use the Mikic et al. 2005 technique to mitigate the need to resolve the ~ 1 km transition-region scale height characteristic of a Spitzer-type conductivity); a rough scale-height estimate illustrating this disparity follows after this list;

    remain highly accurate in the turbulent sub-surface layers, while still employing an effective shock-capture scheme to follow and resolve shock fronts in the upper atmosphere;

    address the extreme temporal disparity of the combined system.
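    To make the disparity of scales concrete, below is a minimal sketch that evaluates the hydrostatic pressure scale height H = k_B T / (mu m_p g) for representative photospheric and coronal conditions; the temperatures and mean molecular weights are illustrative assumptions, not numbers taken from the RADMHD setup.

        # Minimal sketch: pressure scale heights for representative solar values.
        # Temperatures and mean molecular weights below are assumed, order-of-magnitude inputs.
        K_B = 1.381e-23    # Boltzmann constant, J/K
        M_P = 1.673e-27    # proton mass, kg
        G_SUN = 274.0      # solar surface gravity, m/s^2

        def scale_height_km(T, mu):
            """Pressure scale height H = k_B T / (mu m_p g), returned in km."""
            return K_B * T / (mu * M_P * G_SUN) / 1.0e3

        print("photosphere:", round(scale_height_km(5.8e3, 1.3)), "km")   # ~1e2 km
        print("corona:     ", round(scale_height_km(1.0e6, 0.6)), "km")   # ~5e4 km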

  • The Solar Photosphere

    The solar photosphere is an extremely thin, corrugated, and complex layer, in which the plasma β in strong-field regions is of order unity. This is the layer in which magnetic fields can be measured most routinely.

    (Image credit: Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway).)

  • The solar corona (movies courtesy of the LMSAL, TRACE, and LASCO consortia):

    The corona is a low-density, low-β, optically thin, hot plasma

    Plasma entrained within coronal loops evolves rapidly compared to sub-surface structures

    The magnetically-dominated corona can store energy over long periods of time, but will often undergo sudden, rapid, and dramatic topological changes as magnetic energy is released.

    The size scale of coronal structures is generally much larger than the depth of the photosphere

  • RADMHD (Abbett, 2007, ApJ, in press): Numerical techniques. We use a semi-implicit, operator-split method. Explicit sub-step: We use a 3D extension of the semi-discrete method of Kurganov & Levy (2000) with the third-order accurate central weighted essentially non-oscillatory (CWENO) polynomial reconstruction of Levy et al. (2000).

    CWENO interpolation provides an efficient, accurate, simple shock capture scheme that allows us to resolve shocks in the transition region and corona without refining the mesh. The solenoidal constraint on B is enforced implicitly.
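    To convey the flavor of the explicit sub-step, the sketch below advances a 1-D scalar conservation law (Burgers' equation, standing in for the MHD system) with a semi-discrete central scheme: minmod-limited linear reconstruction and a local Lax-Friedrichs flux. This is a deliberately simplified, second-order stand-in, not the third-order CWENO reconstruction of Levy et al. (2000) used in RADMHD, and the forward-Euler update is for brevity only.

        import numpy as np

        def minmod(a, b):
            """Minmod slope limiter."""
            return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        def flux(u):
            """Burgers flux f(u) = u^2 / 2, a scalar stand-in for the MHD flux vector."""
            return 0.5 * u * u

        def step(u, dx, dt):
            """One explicit sub-step on a periodic grid: reconstruct, evaluate fluxes, update."""
            du = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))      # limited slopes
            uL = u + 0.5 * du                                       # left state at i+1/2
            uR = np.roll(u - 0.5 * du, -1)                          # right state at i+1/2
            a = np.maximum(np.abs(uL), np.abs(uR))                  # local max wave speed
            F = 0.5 * (flux(uL) + flux(uR)) - 0.5 * a * (uR - uL)   # central (Rusanov) flux
            return u - dt / dx * (F - np.roll(F, 1))

        # Example: steepen a smooth profile toward a shock; the scheme stays non-oscillatory.
        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = 1.0 + 0.5 * np.sin(2.0 * np.pi * x)
        for _ in range(200):
            u = step(u, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]) / np.max(np.abs(u)))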

  • RADMHD: Numerical techniques. We use a semi-implicit, operator-split method. Implicit sub-step: We use a Jacobian-free Newton-Krylov (JFNK) solver (see Knoll & Keyes 2003). The Krylov sub-step employs the generalized minimum residual (GMRES) technique.

    JFNK provides a memory-efficient means of implicitly solving a non-linear system, and frees us from the restrictive CFL stability conditions imposed by e.g., the electron thermal conductivity and radiative cooling.
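    As a concrete, if greatly simplified, illustration of the implicit sub-step, this sketch takes one backward-Euler step of a 1-D nonlinear (Spitzer-like, kappa ~ T^{5/2}) conduction equation using SciPy's Jacobian-free Newton-Krylov solver with GMRES as the Krylov method. The equation, boundary treatment, and tolerances are illustrative assumptions, not RADMHD's actual energy equation.

        import numpy as np
        from scipy.optimize import newton_krylov   # matrix-free Newton iteration; GMRES inside

        def implicit_conduction_step(T_old, dt, dx, kappa0=1.0):
            """Backward-Euler step for dT/dt = d/dx( kappa0 * T^{5/2} * dT/dx ), T > 0 assumed."""
            def residual(T):
                Tm = np.concatenate(([T[0]], T, [T[-1]]))          # zero-gradient boundaries
                k_face = kappa0 * (0.5 * (Tm[1:] + Tm[:-1]))**2.5  # conductivity at interfaces
                q = k_face * (Tm[1:] - Tm[:-1]) / dx               # conductive flux
                return (T - T_old) / dt - (q[1:] - q[:-1]) / dx    # nonlinear residual
            # Jacobian-free Newton-Krylov: no CFL limit imposed by the conduction term
            return newton_krylov(residual, T_old, method='gmres', f_tol=1e-8)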

  • Characteristics of the Quiet Sun model atmosphere. (Note: the above movie is not a time series!)

  • We drive RADMHD with photospheric velocities determined from magnetogram sequences. The velocities are chosen to match ∂Bz/∂t as determined by the induction equation.

    We have spent a lot of effort deriving and evaluating techniques to do this, with our leading techniques being MEF (Longcope 2004) and ILCT (Welsch et al., 2004).

    In addition to their importance for driving the code, these techniques are useful on their own to derive Poynting and helicity fluxes directly from magnetogram observations.

    By itself, this approach is NOT assimilative: the model is driven to match the observed photospheric field; it does not predict the photospheric field.

  • Photospheric velocities determined from magnetogram sequences drive RADMHD. Currently, we have been exploring driving the RADMHD code directly with sequences of vector magnetograms, without using any assimilative techniques. To do this, it is necessary to find the velocity field at the photospheric boundary such that it is consistent with the vertical component of the magnetic induction equation. We have spent a lot of effort deriving and evaluating techniques to do this, with our leading techniques being MEF (Longcope 2004) and ILCT (Welsch et al. 2004).

    In addition to their importance for driving the code, these techniques are useful on their own to derive Poynting and helicity fluxes directly from magnetogram observations.

  • An example of magnetic evolution in an active region: NOAA AR 8210, 1998 May 1; one day of evolution seen by MDI.

  • Local Correlation Tracking. Central idea of the LCT scheme: find the proper motions of features in a pair of successive images by maximizing a cross-correlation function (or minimizing an error function) between sub-regions of the images. The concept is generally attributed to November & Simon (1988). Useful with G-band filtergrams, Hα images, or magnetograms.

    The FLCT method (which we developed) is similar. For each pixel, we: mask each image with a Gaussian of width σ, centered at that pixel; crop the resulting images, keeping only significant regions; compute the cross-correlation function between the two cropped images, using standard Fast Fourier Transform (FFT) techniques; use a 2nd-order Taylor expansion to find the shifts in x and y that maximize the cross-correlation function to sub-arc-second precision; and use the shifts in x and y and the time Δt between images to find the intensity features' apparent motion along the solar surface.
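    The sketch below illustrates the core of such a scheme for a single pixel: Gaussian windowing, an FFT-based cross-correlation, and a quadratic (2nd-order Taylor) fit around the correlation peak for sub-pixel shifts. It is an illustrative reimplementation of the idea, not the published FLCT code; the windowing and boundary handling are simplified assumptions.

        import numpy as np

        def flct_like_shift(im1, im2, x0, y0, sigma):
            """Apparent shift (in pixels) of features near pixel (x0, y0) between two images."""
            ny, nx = im1.shape
            y, x = np.mgrid[0:ny, 0:nx]
            w = np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))   # Gaussian mask
            a, b = im1 * w, im2 * w
            # cross-correlation via FFTs; fftshift puts zero lag at the array center
            cc = np.fft.fftshift(np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b))))
            jy, jx = np.unravel_index(np.argmax(cc), cc.shape)
            def subpix(cm, c0, cp):            # vertex of a parabola through three points
                return 0.5 * (cm - cp) / (cm - 2.0 * c0 + cp)
            dx = (jx - nx // 2) + subpix(cc[jy, jx - 1], cc[jy, jx], cc[jy, jx + 1])
            dy = (jy - ny // 2) + subpix(cc[jy - 1, jx], cc[jy, jx], cc[jy + 1, jx])
            return dx, dy   # divide by the time between images to get an apparent velocity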

  • Example of LCT flows for NOAA AR 8210 (May 1 1998)

  • The Demoulin & Berger (2003) Interpretation of LCT: the apparent horizontal motion, U_LCT, arises from a combination of horizontal motions and vertical motions acting on non-vertical fields.

  • The Ideal MHD Induction Equation, ∂B/∂t = ∇ × (v × B). How can we ensure that estimated velocities are physically consistent with the magnetic induction equation?

    Only the normal component of the induction equation contains no unobservable vertical derivatives:

    ∂Bz/∂t = −∇h · (vh Bz − vz Bh)

    Demoulin & Berger argue that tracked motions, U, relate to v via:

    U = vh − (vz/Bz) Bh

    so the ideal MHD induction equation simplifies to this form:

    ∂Bz/∂t + ∇h · (U Bz) = 0

  • MEF & ILCT: constrain solutions of the induction equation.

    Let u Bz ≡ vh Bz − vz Bh = ∇h φ + ∇h × (ψ ẑ).

    Solve for φ with the 2D divergence: ∂Bz/∂t = −∇h² φ.

    MEF: minimize ∫ dA (vh² + vz²) to find ψ.

    ILCT: assume u = u_LCT, then solve ẑ · ∇h × (u Bz) = −∇h² ψ.

    Note that if only Bz (or an approximation to it, B_LOS) is known, ILCT can still solve for φ and ψ!
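    As an illustration of the two Poisson problems above, this sketch recovers φ and ψ on a periodic grid with FFT-based Poisson solves, using the sign convention written on this slide. It is a schematic helper, not the actual ILCT implementation, and the periodic-boundary assumption is ours.

        import numpy as np

        def ilct_potentials(dBz_dt, curlz_uBz, dx):
            """Solve del^2 phi = -dBz/dt and del^2 psi = -zhat . curl_h(u Bz) on a periodic grid."""
            ny, nx = dBz_dt.shape
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
            k2 = kx[None, :]**2 + ky[:, None]**2
            k2[0, 0] = 1.0                         # avoid division by zero; the mean is arbitrary
            def poisson(rhs):                      # invert del^2 f = rhs spectrally
                f_hat = -np.fft.fft2(rhs) / k2
                f_hat[0, 0] = 0.0
                return np.real(np.fft.ifft2(f_hat))
            phi = poisson(-dBz_dt)                 # del^2 phi = -dBz/dt
            psi = poisson(-curlz_uBz)              # del^2 psi = -zhat . curl_h(u Bz)
            return phi, psi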

  • Apply ILCT to IVM vector magnetogram data for AR 8210. Vector magnetic field data enable us to find the 3-D flow field from ILCT via the equations shown on the previous slide. Transverse flows are shown as arrows; up/down flows are shown as blue/red contours.

  • 2. Measurements of the magnetic field at the photosphere. (Slide courtesy of Tom Metcalf, CoRA/NWRA)

  • How is the vector magnetic field determined? Magnetic fields will be split by the Zeeman effect, but using the splitting itself is not useful in most cases. (Image: a spot observed at 5250 Å, a normal Zeeman triplet. Slide courtesy of Tom Metcalf, CoRA/NWRA)

  • Zeeman Effect: Normal Zeeman Triplet. The π component is unshifted in wavelength (1); the σ components are shifted to either side of the π component (2).

    If the magnetic field is directed along the line of sight, the σ components are left and right circularly polarized and the π component is unpolarized. If the magnetic field is directed perpendicular to the line of sight, the σ and π components have mutually orthogonal linear polarizations. (Slide courtesy of Tom Metcalf, CoRA/NWRA)

  • How is Polarization Measured? Polarization is measured as the difference between data obtained using two different polarizers. For example, a Wollaston prism or a calcite beam splitter produces two output beams of orthogonal linear polarization: I+Q and I−Q. U and V follow in the same way with a retarder in the path.

    Slide courtesy of Tom Metcalf, CoRA/NWRA
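    For reference, the standard convention (not specific to any one instrument) builds the Stokes parameters from exactly such difference measurements: Q = I(0°) − I(90°), U = I(45°) − I(135°), and V = I(RCP) − I(LCP), with I the total intensity.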

  • The Stokes Profiles. A magnetograph observes the Stokes profiles. V/I is the circular polarization and gives the LOS field; U/I and Q/I are the linear polarization and give the transverse field. (Slide courtesy of Tom Metcalf, CoRA/NWRA)

  • Observed Stokes Profiles: Na-D line observations from the IVM. They look more or less as expected, with a few differences: noise is clearly present, and the prefilter distorts the spectrum. (Figure panels: Stokes I, Q, U, V vs. relative wavelength in nm. Slide courtesy of Tom Metcalf, CoRA/NWRA)

  • 3. Relationship between observed and measured variables: inverting the polarization observations. With the polarization in hand, how do we compute the magnetic field? There are a number of methods:

    Direct measurement of line splitting

    Fitting Stokes profiles

    Weak-field approximations

    Calibration constant(s)

    Different methods actually measure different quantities: the magnetic field or the flux density. Beware the difference! (Slide courtesy of Tom Metcalf, CoRA/NWRA)
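    As an illustration of the weak-field approximation mentioned above, the sketch below estimates the line-of-sight flux density from Stokes I and V profiles via V(λ) ≈ −4.67e-13 g_eff λ0² B_LOS dI/dλ (λ in Ångströms, B in Gauss), using a least-squares fit of V against dI/dλ. The constant and fitting choice are the usual textbook ones, but the function and its interface are illustrative, not any instrument's pipeline.

        import numpy as np

        C_ZEEMAN = 4.67e-13   # Delta_lambda_B = C * g_eff * lambda^2 * B  (Angstroms, Gauss)

        def blos_weak_field(wavelength, stokes_I, stokes_V, g_eff, lambda0):
            """Weak-field estimate of the LOS flux density (Gauss) from Stokes I and V."""
            dI_dlam = np.gradient(stokes_I, wavelength)        # numerical dI/dlambda
            coeff = -C_ZEEMAN * g_eff * lambda0**2 * dI_dlam   # V ~ coeff * B_LOS
            return np.sum(coeff * stokes_V) / np.sum(coeff * coeff)   # least-squares slope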

  • 3. Inverting the Polarization Observations to get B.

    The best method is to observe the Zeeman splitting directly. This is not generally possible for optical observations, since the fields on the Sun are too weak. The Zeeman splitting goes as λ², so this works better in the IR. It gives the magnetic field directly, without worrying about the filling factor.

    The next best method is to fit the Stokes profiles to the Unno profiles (Milne-Eddington atmosphere: source function linear with optical depth). This gives the magnetic field, filling factor, and thermodynamic parameters. (Slide courtesy of Tom Metcalf, CoRA/NWRA)
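    A back-of-the-envelope illustration of the λ² scaling (the lines, effective Landé factors, and 1 kG field strength below are representative assumptions, not measurements):

        # Zeeman splitting Delta_lambda_B = 4.67e-13 * g_eff * lambda^2 * B (Angstroms, Gauss)
        def zeeman_splitting(lam_angstrom, g_eff, b_gauss):
            return 4.67e-13 * g_eff * lam_angstrom**2 * b_gauss

        print(zeeman_splitting(6302.5, 2.5, 1000.0))    # visible Fe I line:  ~0.046 Angstrom
        print(zeeman_splitting(15648.5, 3.0, 1000.0))   # infrared Fe I line: ~0.34 Angstrom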

  • The 180 Degree Ambiguity. The observed transverse field is ambiguous by 180 degrees. There are a number of ways to fix this but, as a practical matter, this is most difficult for the most interesting regions and easy for uninteresting regions.

    Acute-angle solution: fast, but will fail in complex active regions

    Minimum-energy solution: very slow, but more robust

    Correspondence with H-alpha fibrils: generally accurate, but difficult to automate

    (Slide courtesy of Tom Metcalf, CoRA/NWRA)

  • 4. Known sources of error in vector magnetograms:

    Photon statistics: polarization is computed as a difference of two signals

    Polarization cross-talk: polarization signal leaks between Stokes parameters; corrected on an instrument-by-instrument basis

    Calibration constant: the calibration constant in magnetographs is very approximate, and is not constant at all

    Atmospheric seeing: will induce spurious polarization, sometimes strong

    Polarization bias: should be correctable in most instruments by looking at the continuum or regions of very weak field

    Bad 180-degree ambiguity resolution (how to quantify??)

    Bottom line: vector magnetogram errors can be characterized, at least statistically. (Slide courtesy of Tom Metcalf, CoRA/NWRA)

  • Summary of assimilative model requirements:

    Reasonably good physical model: good progress!

    Measurements with good time cadence and accuracy: rapidly improving!

    Well-understood connection between physical variables and measurements: reasonably good

    Understanding of data errors: reasonably good; understanding of model errors: unknown

  • Issues that must be resolved for an assimilative solar MHD model. Currently, the data are used directly to determine the flow field at the photosphere. How can this be made consistent with the Kalman corrector step, since the data have already been used?

    Can the Kalman filter approach be used in a sub-step process to determine the photospheric velocity field instead of using the ILCT or MEF procedures?

    Noise in the vector magnetogram data will probably introduce spurious Alfvén waves into the model, even with the filtering. How do we cope with this?

    How do we estimate model errors? Ensemble runs with Monte Carlo-sampled magnetogram errors? (Non-linearity: chaotic response?)

  • Conclusions

    Difference between assimilative models and models directly driven by data (as we currently do): assimilative models have the potential to accommodate data errors more consistently.

    Assimilative techniques are worth detailed investigation for solar MHD models.

    There are other, simpler solar models that may be more immediately amenable to assimilation techniques.

  • Alternative data for application of the Kalman approach with time-dependent coronal models. Input: photospheric B. Output: n, T, B, v. Possible data to assimilate:

    1. Coronal B, from radio magnetography: LOS integration; B on voxels unknown

    2. Helioseismic estimates of sub-photospheric v: spatial resolution quite coarse

    3. Hα fibrils (direction of chromospheric B): can reveal errors, but how to update the model's B?

    4. With an emissivity model, coronal EUV/SXR emission: LOS integration; unknown heating function

  • Alternative data for the Kalman approach, cont'd. Input: photospheric B. Output: n, T, B, v. Possible data to assimilate:

    5. Chromospheric B_LOS, from, e.g., SOLIS: precise altitude along the LOS unknown

    6. Tomographic density reconstruction: stereoscopy from solar rotation is too slow; during the STEREO mission, this could work!

    Notes for the Kalman filter diagram (slide 2): This diagram shows the entire process of assimilating measurement data with model predictions for a single time step. In terms of the notation, a subscript k denotes an estimate of a quantity at time step k; subscript k-1 denotes the same quantity at the previous timestep. Quantities with a superscript of minus (-) are those updated by the physical model itself; quantities without the minus superscript are those that have been corrected with the measurements. The physical variables are denoted x_k; the physical variables updated by the model alone are denoted x̂_k^-, and the physical variables updated after the measurement-update part of the timestep are denoted x̂_k. The quantity z_k is the set of measured variables at timestep k. The quantity P_k is the estimated error in the physical variables. The meanings of the operators A, H, and K are: the physical model operator A advances x_k from timestep k-1 to timestep k; the H operator converts from physical variables to measured variables; the K operator is the Kalman gain operator, which corrects the model-predicted variable x̂_k^- to x̂_k by operating on the difference between the measurements and the model's predicted values for the measurements. K_k also includes contributions from the measurement errors R.

    The sequence of operations can be divided into the time-update operations (predict) and the measurement-update operations (correct). The sequence goes like this (see the sketch after this list):

    1. Run the model forward in time one timestep, to get the physical variables x̂_k^- at the next timestep.

    2. Estimate the error in the physical variables, P_k^-, at the new timestep, using the estimated errors from the previous timestep plus the estimate of the model or process error Q.

    3. Using the model estimates x̂_k^- and P_k^-, compute the Kalman gain K_k. K_k can be thought of as an operator that does a least-squares fit between the model and the data.

    4. Use the Kalman filter operator to update the model-computed variables to reflect the measurements at this timestep.

    5. Use the Kalman filter operator to update the error in the physical variables, correcting for the measurements.

    Move on to next timestep!
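    A minimal numerical sketch of this predict/correct cycle for a linear model in the notation above; the matrices and noise levels in the example are small illustrative placeholders, not a solar model.

        import numpy as np

        def kalman_step(x_hat, P, z, A, H, Q, R):
            """One predict/correct cycle of the discrete Kalman filter."""
            # Time update (predict)
            x_pred = A @ x_hat                      # x_hat_k^- = A x_hat_{k-1}
            P_pred = A @ P @ A.T + Q                # P_k^- = A P_{k-1} A^T + Q
            # Measurement update (correct)
            S = H @ P_pred @ H.T + R                # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain K_k
            x_new = x_pred + K @ (z - H @ x_pred)   # x_hat_k
            P_new = (np.eye(len(x_hat)) - K @ H) @ P_pred
            return x_new, P_new

        # Illustrative use: a 2-variable toy state with one observed quantity.
        A = np.array([[1.0, 0.1], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
        Q = 1e-4 * np.eye(2); R = np.array([[1e-2]])
        x, P = np.zeros(2), np.eye(2)
        x, P = kalman_step(x, P, z=np.array([0.5]), A=A, H=H, Q=Q, R=R)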

    Notes: Movie of AR 8210 evolving. Clearly there are motions; describe them qualitatively. Ask these questions: To what extent is it possible to use sequences of magnetograms and vector magnetograms to determine the 3-D velocity field simply by observing changes in the pattern from a sequence of magnetograms?