<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
	    <channel>
        <title>COST1207 - Forum: WG2 - Algorithms</title>
        <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/</link>
        <description><![CDATA[COST Action ES1207]]></description>
        <generator>Simple:Press Version 6.10.11</generator>
        <atom:link href="http://eubrewnet.aemet.es/cost1207/forum/wg2/rss/" rel="self" type="application/rss+xml"/>
		                <item>
                    <title>Javier Lopez-Solano on AOD product</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/aod-product/#p71</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/aod-product/#p71</guid>
					                        <description><![CDATA[<p>Hi all!</p>
<p>As you already know from the Azores and Edinburgh meetings, within the framework of COST Action 1207 and the WMO-CIMO Testbed for Aerosols and Water Vapor Remote Sensing Instruments (Izaña), at the RBCC-E we have been working, in collaboration with other participants of EUBREWNET (Thomas Carlund and Henri Diémoz, to cite just two names), on the development of the AOD algorithm for the network.</p>
<p>We plan to start coding the AOD product in the server very soon. To make things easier, it would be better to have the different levels of the product defined in advance. So, to start somewhere, we have come up with the following:</p>
<pre>
<p><input type='button' class='sfcodeselect' name='sfselectit4991' value='Select Code' data-codeid='sfcode4991' /></p><div class='sfcode' id='sfcode4991'>
* Level 0
   Taken directly from the Brewer (IOS) program

* Level 1
   1) Ozone from the L1.5 product, with the standard Brewer Rayleigh correction replaced by the one produced by Bodhaine's coefficients

   2) Corrections to the counts:
      a) Same as in the ozone: Individual (not summaries!) raw counts with dark counts and dead time corrections, plus ozone L1.5 data filters (these counts do NOT include the standard Brewer corrections for temperature, Rayleigh, and filters)
      b) AOD specific:
         i) Temperature correction with absolute temperature coefficients (not available right now, use the relative ones from the ozone configuration)
         ii) Filter correction, with spectral attenuation coefficients for each filter
         iii) Earth-Sun distance correction

   3) AOD calculation (uses an ETC matrix, with one calibration constant for each wavelength and filter)
      a) Rayleigh correction with the spectral Rayleigh coefficients from Bodhaine's prescription (to start, we will use the climatological pressure as in the ozone, but might change to a reanalysis value at a later date)
      b) Ozone correction with the spectral ozone absorption coefficients

* Level 1.5
   1) To the AOD Level 1 product, add the AOD-specific data filters and corrections:
      a) AOD data filter based on the standard deviation of each group of 5 observations (limit is 0.02, following Gröbner 2004)
      b) Polarization correction (currently, Cede et al. but may change to Diémoz and Virgilio in the future)

   2) Still to be developed:
      a) Stray light correction
      b) Standard lamp correction (can it be used somehow to track changes in the AOD configuration?)
      c) Other corrections and filters

* Level 2.0
   1) Ozone from the L2 product
   2) AOD configurations validated against Brewer/PFR/AERONET references
</div>
</pre>
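To make the Level 1 calculation (item 3 above) concrete, here is a minimal sketch of an AOD retrieval from ETC-calibrated counts via Beer-Lambert, with Rayleigh and ozone corrections. The function and variable names are ours and the air-mass handling is simplified; this is not the operational EUBREWNET code.

```python
import math

def aod_level1(counts, etc, tau_rayleigh, alpha_o3, ozone_du,
               m_aer, m_ray, mu_o3, p=1013.25, p0=1013.25):
    """Sketch of the Level 1 AOD step: Beer-Lambert with an ETC per
    wavelength/filter, a Rayleigh correction (tau_rayleigh would come from
    Bodhaine's coefficients) and an ozone correction. Names illustrative.

    counts       -- corrected counts (dead time, dark, temperature, filter
                    and Earth-Sun distance corrections already applied)
    etc          -- extraterrestrial constants for each wavelength/filter
    tau_rayleigh -- spectral Rayleigh optical depths at reference pressure p0
    alpha_o3     -- spectral ozone absorption coefficients [1/DU]
    ozone_du     -- total ozone column [DU] (from the ozone product)
    m_aer, m_ray, mu_o3 -- aerosol, Rayleigh and ozone air-mass factors
    p, p0        -- station (climatological) and reference pressure [hPa]
    """
    aod = []
    for f, e, tr, a in zip(counts, etc, tau_rayleigh, alpha_o3):
        total = math.log(e) - math.log(f)          # total slant optical depth
        tau = (total
               - m_ray * tr * p / p0               # remove the Rayleigh part
               - mu_o3 * a * ozone_du) / m_aer     # remove the ozone part
        aod.append(tau)
    return aod
```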
<p>We would like to have your input -- do you agree with the general layout of the levels? Do you miss some correction/filter? If you have experience with other AOD products, how does this compare to them?<br />
And, of course, anything else you come up with.</p>
<p>We have already received some suggestions from Thomas Carlund, which are included above, and from Stelios Kazadzis, who points out that:</p>
<pre>
<p><input type='button' class='sfcodeselect' name='sfselectit4451' value='Select Code' data-codeid='sfcode4451' /></p><div class='sfcode' id='sfcode4451'>
I would put all level 1.5 corrections under level 1, since as it stands the level 1 AOD calculation is unusable and the AOD has to be recalculated after the steps presented in level 1.5.
So I would put:
L1 output:  only corrected signals
L1.5: final AOD with preliminary calibration (including all corrections)  
L2: AOD with final calibration
The need for a new calibration could be identified some months later than the actual measurement (see the AERONET example below).
 
Another issue is the cloud flagging mentioned in level 1.5.1 a).
It has to be defined whether the AOD cloud flagging principles will be the same as the ozone acceptance/rejection principles based on the standard deviation of the group of 5 observations.
Given that you need the ozone to derive AOD, the AOD cloud flagging has to be the same or stricter (e.g. cirrus cloud cases).
 
Since you are mentioning AERONET data:
Level 1.5 data include all corrections and the cloud-flagged final AOD product, so someone can use them more or less in real time.
Level 2 data are calculated much later, when the instrument is re-calibrated, and they are the final data.
I think in the Brewer case, someone who wants to follow this would have to finalize everything under level 1.5,
and then x months later, when you re-calibrate the instrument and determine new ETCs, go back and recalculate all AOD data again as level 2.
 
 
Concerning the ETC part:
“the ETC calibration is a matrix for filter and wavelength”
I would work with converting all Brewer counts of all filters to nd 0 by having a conversion function Counts (wl,filter)= f(nd(wv,0)).
I suppose that’s how you have calculated ETCs for all filters.
Using ETCs calculated from filter conversions includes an additional step of the convolution of the ETC to the Brewer slit at a specific wavelength and for different nd filters.
But I think maybe this is a detail, or you have calculated the ETCs for different nd filters in some other way.
</div>
</pre>
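The group-of-5 standard-deviation screen discussed above (limit 0.02, following Gröbner 2004) can be sketched as follows; the function name and interface are illustrative only, not the operational code.

```python
import statistics

def cloud_screen(aod_group, limit=0.02):
    """Reject a group of 5 AOD observations when their standard deviation
    exceeds `limit` (0.02, following Groebner 2004). Returns True when the
    group should be rejected as likely cloud-contaminated."""
    if len(aod_group) != 5:
        raise ValueError("expected a group of 5 observations")
    return statistics.stdev(aod_group) > limit
```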
]]></description>
					                    <pubDate>Mon, 09 Jan 2017 10:15:25 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p61</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p61</guid>
					                        <description><![CDATA[<p>Hi Tomi,</p>
<p>Fair point. I suspect this comes down to separating the variability of the lamp from that of the instrument. The simplest way of treating this is to assume that all the variation is due to the instrument, and that the reference lamp is stable -- as one would hope in the first instance. To go beyond this, I guess you need some characterisation of lamp stability via an independent instrument (either in the lab or by a co-located instrument). Perhaps this would give some idea of typical responses of the lamp that you could then use to make objective decisions about which changes are likely instrument response changes and which are due to the lamp.</p>
<p>As to the time interval, I'm still of the opinion that you'd want something short that represents the behaviour of the instrument close to the measurement you're interested in (having separated out the lamp, as above).</p>
<p>Yours,<br />
Andy</p>
]]></description>
					                    <pubDate>Fri, 19 Dec 2014 14:14:50 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p60</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p60</guid>
					                        <description><![CDATA[<p>Hello Andrew,</p>
<p>the only thing I am afraid of is that the R6 does not really follow the changes in the ETC (meaning the changes in the response), but if you for example have five SL tests during a day, they are on average quite close to the truth. If the +/-10 units is not due to changes in the instrument's response but due to lamp spectrum fluctuation, then we should most definitely use something that has smoother features.</p>
<p>How to detect steps is then another story, and determining whether it's a change in the response or in the lamp is maybe even harder. I think it at least calls for a human eye to decide when the reference value should be changed. And this should be done after the next calibration, because if the lamp reference has changed but not the ETC, then we should apply new calibration constants from the point of change onwards.</p>
<p>best regards Tomi</p>
]]></description>
					                    <pubDate>Fri, 19 Dec 2014 14:11:19 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p59</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p59</guid>
					                        <description><![CDATA[<p>Some helpful views from Andew:</p>
<p>Hi Tomi,</p>
<p>Looking at your five options for managing the R6 values, the one that makes the most sense to me is a daily median. I think it is very hard to manage the steps and jumps that you see in the time series with any other option over a longer time scale. For example, how would you decide on a criterion for a step, and when not to apply the time averaging? Besides, do you not want the R6 that is most representative of the instrument behaviour on the day in question -- which you only get by using a daily mean of some sort? Also, I think the median has the benefit of rejecting outliers nicely, whereas the mean is influenced by them.</p>
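As an aside, a quick illustration (with invented R6 values) of why the daily median resists a single outlier while the mean does not:

```python
import statistics

# Five SL tests from one day; the last value is an outlier (values invented).
r6 = [1540, 1542, 1538, 1541, 1595]
daily_median = statistics.median(r6)  # barely affected by the outlier
daily_mean = statistics.mean(r6)      # pulled upward by the outlier
print(daily_median, daily_mean)       # 1541 1551.2
```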
<p>Yours,<br />
Andy</p>
]]></description>
					                    <pubDate>Fri, 19 Dec 2014 14:10:30 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p58</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p58</guid>
					                        <description><![CDATA[<p>Hello all,</p>
<p>thank you very much for your input on my survey, and sorry for the late response.</p>
<p>All the replies I got will be on the eubrewnet.org forum with this message. (Please inform me if you don't want your response in the forum.)</p>
<p>Here is a short overview of the replies and my thoughts on them.</p>
<p>We have to decide how to manage two different things: the measurements and the reference value. First thing about the measurements: they are kind of noisy. I looked at the standard lamp history of Brewer 037, and the variability seems to be around 10 to 15 units, so some kind of smoothed/mean value could be good as representative of the measurements.</p>
<p>The used/suggested methods to manage the measured R6 values were:</p>
<p>-daily median<br />
-daily mean<br />
-10 days running mean with weighting (O3Brewer)<br />
-21 days running mean with weighting (found it on some WOUDC related website by Googling :D)<br />
-fitting a polynomial to the measurements</p>
<p>The time window for any of these methods should also be investigated.</p>
<p>Also, there is the problem of occasional outliers. When I calculated the running mean for the attached figure, I took out all the values that were more than 15 units off the median value for the time window (10 or 21 days).</p>
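That outlier-filtered running mean can be sketched as below (trailing window; the 15-unit cutoff as above; the function name is ours, not the code used for the figures):

```python
import statistics

def filtered_running_mean(values, window=10, cutoff=15.0):
    """Trailing running mean of daily R6 values: within each window, drop
    values more than `cutoff` units from the window median, then average
    the rest. A sketch of the procedure described for the figures."""
    out = []
    for i in range(len(values)):
        win = values[max(0, i - window + 1): i + 1]
        med = statistics.median(win)
        kept = [v for v in win if abs(v - med) <= cutoff]
        out.append(statistics.mean(kept))
    return out
```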
<p>On the other hand, we should also manage the reference value. If all the changes are changes in the instrument and not in the lamp, the reference value can (and should) be kept the same, as should the ETC. This way the SL-corrected ETC will be fine. On the other hand, if we see in the next calibration that the changes in the R6 value are not the same as the change in the ETC, we should figure out what to use as the reference. Of course, if there is a step or a jump, a new reference value should be introduced. But what about slow change?</p>
<p>Many of the methods in your answers included an interpolation of the reference value. Should this be used for each instrument?</p>
<p>Attached are some figures with the SL time series of Brewer 037. The first figure shows the whole time series. There is some strange behaviour in the beginning, and stability is reached in the early 2000's <img src="https://eubrewnet.aemet.es/cost1207/wp-includes/images/smilies/icon_biggrin.gif" alt=":D" class="spWPSmiley" style="max-height:1em;margin:0"  /> </p>
<p>There is also an example of these different smoothing methods for the year 2012.</p>
<p>Feel free to comment on the figures or my thoughts.</p>
<p>best regards and Merry Christmas<br />
Tomi</p>
<p>figures: <a href="https://www.dropbox.com/sh/at4etrl3efk2crm/AABI9UJUuBXaE1P9Txs3EphVa?dl=0" rel="nofollow" target="_blank">https://www.dropbox.com/sh/at4etrl3efk2crm/AABI9UJUuBXaE1P9Txs3EphVa?dl=0</a></p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:06:16 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p57</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p57</guid>
					                        <description><![CDATA[<p>Dear Tomi,</p>
<p>the standard lamp correction on the Brewer MKIV 097 in Poprad-Ganovce has been<br />
performed by the following procedures:<br />
1. SL tests have been run at least 3 times per day.<br />
2. Interdiurnal SL corrections of total ozone have been done using the software<br />
of Martin Stanek (O3Brewer v. 5.0 - SL test, O3 correction and<br />
recalculation). There is an apparent correlation between the instrument<br />
temperature (and also the ambient temperature) and the SL correction (R5, R6<br />
ratios).<br />
3. The SL corrections after the instrument calibration have been done in<br />
accordance with the recommendations set in the instrument calibration report. If<br />
ozone data recalculation is recommended, the correction is done using<br />
linear interpolation in accordance with the SL history.<br />
4. Correction of the ozone has not been applied to data measured before<br />
sudden changes in the instrument, like the SL exchange or other serious<br />
technical problems (e.g. exchange of some instrument part).</p>
<p>Anna and Oliver</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:05:44 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p56</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p56</guid>
					                        <description><![CDATA[<p>Hello all,</p>
<p>we have already seen several replies with good practices for SL<br />
corrections. Here is my 2 cents.</p>
<p>1. Fitting a quadratic polynomial (vs. time) to a set of SL ratios may<br />
help cope with trends and curvature in the changes.<br />
2. Regardless of what type of fit is used, the difference between the<br />
fitted and un-fitted data is a good metric for how well the fit<br />
describes the changes, especially if those are fast and/or non-linear.<br />
This can also reveal steps in the record.<br />
3. Routinely calculating the correlation between R6 and temperature may help<br />
prevent running with non-optimal temperature coefficients. This can<br />
be done together with fitting against time to separate the two effects.<br />
4. While not strictly in the theme of this topic, we need to make sure<br />
enough SL tests are done every day to be representative of the state of<br />
the instrument at different temperatures throughout the day.<br />
5. Whatever function is used, comparing the data extrapolated into today<br />
with the actual data from today is an important measure of the quality of<br />
the model.</p>
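Points 1 and 2 can be sketched with a simple least-squares fit, where large residuals flag fast or non-linear changes or steps; a numpy-based sketch with illustrative names:

```python
import numpy as np

def fit_sl_quadratic(days, r6):
    """Fit R6(t) with a quadratic polynomial and return (fitted, residuals).
    Large residuals indicate changes the fit cannot describe -- fast or
    non-linear variation, or steps in the record (sketch, not operational)."""
    coeffs = np.polyfit(days, r6, deg=2)
    fitted = np.polyval(coeffs, days)
    residuals = np.asarray(r6, dtype=float) - fitted
    return fitted, residuals
```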
<p>Cheers,</p>
<p>Volodya</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:05:17 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p55</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p55</guid>
					                        <description><![CDATA[<p>Hi Tomi</p>
<p>I asked Hugo to send me some information on how we apply the standard lamp correction at RMIB.<br />
This is the answer I got:</p>
<p>" RMI has developed the following procedure to take into account the results of the SL tests:<br />
- First a visualisation tool is used to see the evolution of the R6 (R5) readings.<br />
It shows (monthly) means with the standard deviation. If there is a gradual change<br />
then at a certain point in time a monthly mean is used to interpolate the ETC linearly<br />
from the previous point according to the change in SL reading. This is done with an<br />
off line program that reprocesses all the data. The distances in time between these<br />
points are to be chosen in such a way that the linear approximation is adequate.<br />
If a sudden jump is detected then the corresponding new ETC is also applied as a jump.<br />
- Before applying such changes the ensemble of information has to be checked (i.e. comparison<br />
with co-located instrument, instrument in the vicinity, satellite data) in order to be sure that it is a<br />
real change in the instrument. This has as a consequence that the corrections with respect to the<br />
SL tests can only be done a posteriori."</p>
<p>Best regards<br />
Veerle</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:04:58 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p54</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p54</guid>
					                        <description><![CDATA[<p>Hi Tomi,</p>
<p>Our method is close to Alberto's and Diamantino's. We correct using a daily mean SL ratio where the reference is either the last calibration SL ratio (from Volodya's report), or for reprocessing after a calibration, we use an interpolated reference between two calibrations. We take account of any known steps. The reference values and steps are stored in a text file that is read by a lower level routine so that whenever a new value is requested the most recent references are used.</p>
<p>Yours,<br />
Andy</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:04:08 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p53</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p53</guid>
					                        <description><![CDATA[<p>Hi Tomi,</p>
<p>We use the last SL test results for the corrections as well as the calibration references between 2 intercomparison dates.</p>
<p>The formulae we use are:</p>
<p>etco3 = b1cal(ical1) + (r6(jtest) - r6cal(ical1)) + (r6cal(ical1) - r6cal(ical2) - b1cal(ical1) + b1cal(ical2)) * dt / tt</p>
<p>and then:</p>
<p> o3 = (ms9 - etco3) / (a1 * mu)</p>
<p>where:</p>
<p>b1cal(ical1): etco3 reference at calibration date ical1<br />
b1cal(ical2): etco3 reference at calibration date ical2<br />
r6cal(ical1): SL R6 ratio reference at calibration date ical1<br />
r6cal(ical2): SL R6 ratio reference at calibration date ical2<br />
r6(jtest): Last SL R6 ratio at date jtest<br />
dt: Time difference (days) between the last SL test and the calibration date ical1<br />
tt: Time difference (days) between the calibration dates ical1 and ical2 (ical2 &#62; ical1).</p>
<p>o3, ms9, a1 and mu have the usual meanings.</p>
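The formulae translate directly into code; a sketch, with variable names following the definitions above rather than any operational software:

```python
def sl_corrected_etc(b1cal1, b1cal2, r6cal1, r6cal2, r6_last, dt, tt):
    """ETC corrected with the last SL R6 ratio, plus linear interpolation of
    the calibration references between dates ical1 and ical2 (dt, tt in days).
    At dt = 0 this reduces to b1cal1 + (r6_last - r6cal1); at dt = tt with
    r6_last = r6cal2 it reduces to b1cal2."""
    return (b1cal1 + (r6_last - r6cal1)
            + (r6cal1 - r6cal2 - b1cal1 + b1cal2) * dt / tt)

def total_ozone(ms9, etco3, a1, mu):
    """O3 = (MS9 - ETC) / (A1 * mu), with the usual Brewer meanings."""
    return (ms9 - etco3) / (a1 * mu)
```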
<p>Regards,<br />
Tino.</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:03:36 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p52</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p52</guid>
					                        <description><![CDATA[<p>Hello</p>
<p>This is what we currently use:</p>
<p>1- For the SL calculation we use the daily median, to avoid outliers, without any smoothing; so the correction is done in daily intervals.<br />
2- If no SL measurements are made during the day, we use the day before, or the last recorded value.<br />
3- The SL reference values are from the config file (O3Brewer); they come from the calibration report.<br />
    (As suggested by Volodya, if this is not available a mean of 10 days after the ICF date can be used, as in BDMS.)<br />
4- SL changes of less than +/- 5 units are not used for the correction; this is our estimate of the noise, but it can be calculated as 1.5 std of the daily mean.<br />
5- We record and store the correction applied (SL_ref - SL_obs).</p>
<p>Regards<br />
Alberto</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:03:13 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p51</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p51</guid>
					                        <description><![CDATA[<p>Dear Tomi,</p>
<p>my few and messy ideas:</p>
<p>1. I take for granted that we already have all necessary information to<br />
discriminate between changes in the Brewer vs changes in the lamp only<br />
(e.g., lamp replacements). I suspect that only the user of each Brewer<br />
knows about that (and maybe logged it in the logbook). Any algorithm should<br />
easily let the user introduce discontinuities.</p>
<p>The Brewer Processing Software (BPS) from EC, for example, expects a<br />
discontinuity whenever a new configuration file comes up and, if I'm<br />
right, scales the SL results to ensure continuity between the series;</p>
<p>2. points which are too far from the bulk of the series should not be<br />
used (or at least investigated) in my opinion, since they could arise<br />
from instrumental issues that don't necessarily affect the ozone<br />
retrieval in the same way as the sl. Can we get to an agreement on this,<br />
too?</p>
<p>3. misalignments in filter wheels (e.g., FW#3) or other problems could<br />
produce strange behaviours in SL series, such as broad or double<br />
strips of points. How do we cope with them? Do we just not use these series?</p>
<p>4. an extensive study (STSM ?) of several sl series from many Brewers<br />
could be very useful to identify common issues and typical time scales<br />
of variation (can high-frequency noise and the "real" signal be<br />
completely separated?). If we don't have enough time to go into detail, we<br />
should rely on the experience of our experts. It would be good to have<br />
an overview of the most common issues affecting sl measurements<br />
(Volodya?). Also, I think that the RBCC-E campaigns can provide an<br />
indication on how many Brewers are actually performing well and why<br />
others aren't (Alberto?).</p>
<p>I personally use the BPS by Vitali. It's open source, so everybody can<br />
look at its code for further information.</p>
<p>Cheers,</p>
<p>Henri</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:02:39 +0000</pubDate>
                </item>
				                <item>
                    <title>tomikarp on Standard Lamp -test</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p50</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/standard-lamp-test/#p50</guid>
					                        <description><![CDATA[<p>Dear fellow arithmophiles,</p>
<p>we are searching for the best possible way to include the standard lamp correction in our Brewer data. Please feel free to share your way of using the standard lamp measurements to correct your total ozone measurements. If you do it by e-mail, please use "reply all" so we don't cover the same method multiple times. I will set up this conversation on the forums at eubrewnet.org on Monday. I will also move all the discussion from the e-mail thread there.</p>
<p>Please share your method in clear human language (preferably English), but any additional information in the form of code and/or mathematical entities is very welcome.</p>
<p>Feel free to share this message with anyone who might be interested and for some reason doesn't seem to be on this mailing list and/or in Working Group 2.</p>
<p>best regards and a nice weekend<br />
Tomi</p>
<p>P.S. When you think of the best possible solution from those suggested:</p>
<p>I think the timespan of any solution using an average should be such that it is a realistic time for the response of the instrument to change. I am not sure if the change should be allowed to be a step or smooth. But, for example, let us not use only the last value, because there is some noise in the SL test, and to my mind that noise would just be moved to the ozone measurements. Also, an average over too long a period (but how long is that?) might miss some characteristics of response change in the instrument.</p>
]]></description>
					                    <pubDate>Thu, 18 Dec 2014 12:01:33 +0000</pubDate>
                </item>
				                <item>
<title>redondas on Hot topics in Brewer ozone retrieval algorithms</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/hot-topics-in-brewer-ozone-rertrieval-algorithms/#p11</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/hot-topics-in-brewer-ozone-rertrieval-algorithms/#p11</guid>
					                        <description><![CDATA[<p>Dear all</p>
<p>A new algorithm with 5 wavelengths, new weighting coefficients, cross sections, and stray light are nice research topics, but difficult to implement in the database at this early stage.</p>
<p>For the moment I think we have to start by implementing the standard algorithm and including all the different processing strategies. In previous work, Tapani and I compared different processing software for ozone: EC BDMS, O3Brewer and RBCC-E (<a href="ftp://ftp.tor.ec.gc.ca/Workshops/Aosta_2009/Presentations/2009-09-25_Friday/Koskela-DifferencesProcessing.pdf" target="_blank">ftp://ftp.tor.ec.gc.ca/Workshops/Aosta_2009/Presentations/2009-09-25_Friday/Koskela-DifferencesProcessing.pdf</a>)</p>
<p>The key point is how to apply the standard lamp correction and how to determine the reference value (it is automatic in the case of BDMS). So I suggest starting with this point.</p>
<p>These other topics/improvements to the standard algorithm are easier to introduce/analyze in the database:</p>
<p>- Ozone air mass calculation from climatology</p>
<p>- Rayleigh coefficients from calibration (and not fixed for every Brewer)</p>
<p>- Ozone effective temperature from climatology</p>
<p>- Neutral density detection/correction</p>
<p>The database will store raw files, so testing new algorithms and new methodologies will be easy; just think about what needs to be stored, and remember it has to be applied to 50 Brewers. So it is better to keep things as simple as we can.</p>
<p>Alberto</p>
]]></description>
					                    <pubDate>Thu, 22 May 2014 23:07:47 +0100</pubDate>
                </item>
				                <item>
<title>redondas on Hot topics in Brewer ozone retrieval algorithms</title>
                    <link>http://eubrewnet.aemet.es/cost1207/forum/wg2/hot-topics-in-brewer-ozone-rertrieval-algorithms/#p10</link>
                    <category>WG2 - Algorithms</category>
                    <guid isPermaLink="true">http://eubrewnet.aemet.es/cost1207/forum/wg2/hot-topics-in-brewer-ozone-rertrieval-algorithms/#p10</guid>
					                        <description><![CDATA[<p>IBERONESIA 3.0 Road Map</p>
<p>Hello, we are now working to develop IBERONESIA 3.0, based on the experience with IBERONESIA 2.0.<br />
Bento recently traveled to NOAA to study the newbrew system; a comparison of the two systems can be found in his STSM report.</p>
<p>This is a summary of the database:<br />
  *RAW DATA DATABASE: It will store all files produced by the Brewer, mainly the B file and characterization files (RAW: data that will not change in the future)<br />
  *CALIBRATION DATABASE: It will store calibration information; this includes the current contents of ICF, UVR and the values from the calibration sheet from IOS or RBCC-E.<br />
  *PROCESS CONFIGURATION DATABASE: It will store all the parameters needed to process the ozone from the raw files + calibration. As a starting point for ozone we use the configuration file from Martin Stanek's O3Brewer (airmass range, sigma level, etc.)<br />
  *PRODUCT DATABASE: Different levels of ozone, ultraviolet and AOD</p>
<p>ROAD MAP</p>
<p>September: The administrator will set up the Brewer and the stations can send the RAW files.<br />
    We will focus on testing/solving the communication problems.<br />
October: The interface to send the calibration files and processing files for each station will be ready.<br />
We will focus on testing/solving the configuration/processing issues.<br />
November: Standard ozone product ready.<br />
 2015: Quality checks implemented<br />
   Level 1 (Database) file parsing<br />
   Level 2 (Operational) focus, n of hp/hg  ….<br />
   Level 3 (Operative) Neighbour/satellite comparison, climatological checks</p>
<p>What do we need:<br />
- Beta testers for sending data: please contact us<br />
- Define the calibration/characterization database (starting point: ICF + calibration sheets)<br />
- Define the processing/algorithm database (starting point: standard algorithm)<br />
- Provide the pseudocode of the algorithms if they differ from the standard.</p>
]]></description>
					                    <pubDate>Thu, 22 May 2014 15:40:31 +0100</pubDate>
                </item>
				    </channel>
	</rss>