CAN-EYE User Manual
CONTENT
CAN-EYE V6.313 USER MANUAL
1. INTRODUCTION
1.1. CAN-EYE specific features
1.2. CAN-EYE basic principles
1.3. Hardware and software requirements
1.4. Copyright and limitations of responsibility
1.5. Upgrading the version
1.6. Bug report
2. INSTALLATION
3. USING CAN-EYE STEP BY STEP
3.1. Choosing the image type to be processed
Hemispherical images
Images acquired with a camera inclined at 57.5°
Images acquired with a camera at the vertical position (nadir)
3.2. Defining the processing parameters
Processing parameters for DHP
Processing parameters for images at 57.5°
Processing parameters for images acquired at nadir
3.3. SELECTING PERTINENT IMAGES
3.4. MASKING IMAGES
GAMMA: Gamma Correction
SELECT: Selection of one image where a mask must be defined
APPLY ALL: Apply last mask to all images
UNDO: Undo last mask
RESET: Reset all the masks
DONE: End the masking
SLOPE: Define slope
3.5. CLASS DEFINITION
No mixed pixels (2 classes)
2 Classes + Mixed Pixels
3.6. Classifying the images
4. CAN-EYE OUTPUT DESCRIPTION
4.1. Definitions and theoretical background
Introduction
Modeling the Gap Fraction
Modeling the leaf inclination distribution function g(θl, φl)
Estimating leaf area index and leaf inclination from gap fraction measurements
Cover fraction computation
FAPAR computation
4.2. Description of CAN-EYE output directory content
Hemispherical images
Images acquired at 57.5°
Images acquired at Nadir
5. SUMMARY MODULE
Hemispherical images
Images acquired at 57.5°
Images acquired at Nadir
6. CALIBRATION MODULE (DHP only)
6.1. System Definition
6.2. Optical centre characterization
6.3. Projection function characterization
7. REFERENCES
List of Figures
Figure 1. Overview of the CAN-EYE processing of a series of images
Figure 2. Example of the Maize directory containing a series of 9 images (.JPG) corresponding to one ESU and to be processed concurrently
Figure 3. CAN-EYE Hemispherical Images menu
Figure 4. CAN-EYE Images at nadir (0°) menu
Figure 5. CAN-EYE Images at nadir (0°) menu
Figure 6. Processing parameter window for hemispherical images
Figure 7. Creating projection function and optical centre characteristics
Figure 8. Processing parameter window for images acquired at 57.5°
Figure 9. Processing parameter window for images acquired at nadir (0°)
Figure 10. Selection of pertinent images
Figure 11. Main masking window
Figure 12. Gamma correction value (masking process)
Figure 13. Secondary masking window (after having selected the image to be masked)
Figure 14. Applying a mask to all images
Figure 15. Classification window
Figure 16. CAN-EYE Images at nadir (0°) menu
Figure 17. CAN-EYE calibration menu
Figure 18. Example of the Start sheet of the calibration excel file
Figure 19. Image coordinate system
Figure 20. Illustration of the holes drilled in the fish-eye cap. The red arrow indicates the rotation of the cap.
Figure 21. A series of images taken for several positions of the fish-eye cap. In this case, three holes were considered.
Figure 22. Example of an excel file sheet to be filled to determine the optical centre of a system.
Figure 23. Example of a CAN-EYE output, showing the fitting of the circles to the hole positions in the case of three holes. The actual optical centre is shown by the red cross.
Figure 24. Example of a projection function sheet of the calibration excel file.
Figure 25. Experimental design scheme.
Figure 26. Example of an image of the experimental design taken with the hemispherical camera and used for the calibration of the projection function. The horizontal dotted yellow line corresponds to the diameter of the image passing through the optical centre (defined by its coordinates as measured previously). The camera is aligned thanks to the front nail and background line.
Figure 27. Example of projection function characterization with CAN-EYE
1. INTRODUCTION
CAN-EYE V6.1 is a free software package developed at the EMMAH laboratory (Mediterranean Environment and Agro-Hydrosystem Modelling) of the French National Institute of Agricultural Research (INRA). The authors remind users that this software is a didactic product made only for pedagogic use. It is protected in France by Intellectual Property Regulations and abroad by international agreements on copyright.
It can be downloaded at [Link]. For any information, question or bug report, please contact can_eye@[Link].
1.1. CAN-EYE specific features
CAN-EYE has a set of specific features that improve its efficiency, accuracy, flexibility and traceability:
Efficiency: a series of images is typically processed within 2 to 20 minutes, depending on the complexity of the images, the experience of the user and the performance of the computer used.
Accuracy: the image conversion into a binarized image (green vegetation/other) is [Link] to separate green elements from the sky or the soil, and allows acquiring images from both above or below the canopy.
Flexibility: CAN-EYE allows calibrating the imaging system, as well as defining the area [Link] [Link]. A specific tool is also implemented to take into account the slope of the terrain.
Portability: CAN-EYE is very easy to install.
Traceability: [Link] these purposes, an html report and output documents (excel format) as well as intermediate results are automatically generated by the software.
1.2. CAN-EYE basic principles
Figure 1 provides an overview of the different steps required to process series of images with CAN-EYE, which are described hereafter:
Setup of the processing
The user first selects the type of images he has acquired (DHP, images at 57.5°, images at nadir), whether they are in RGB colours or binarised, and whether they were acquired upward (looking at the sky) or downward (looking at the soil). After selecting the directory where the images to be processed are stored, the user has to define the characteristics of the processing. Default values are proposed for all the items. This setup configuration can be saved and used to process another series of photos (Figure 1, step 1).
For hemispherical images, a calibration method is proposed to characterize most of the fish-eye lens + camera system.
When selecting binarised images, the following steps are not required and CAN-EYE can be used in batch processing (by selecting the directory that contains all the series of images to be processed) to automatically generate the outputs.
Pre-processing the images
The images are then loaded and displayed in a window. It is possible to select [Link], it is possible to mask parts of the [Link], the gamma factor can be changed to brighten or darken the images and provide a better visual discrimination between the vegetation elements and the background (Figure 1, step 2).
Classification (in case of RGB images)
When the pre-processing step ends, the number of colours is reduced to 324, which is sufficient to get good discrimination capacities while remaining small enough to be easily manipulated. The classification is then the most critical phase and needs to be interactive, because the colours associated to each class depend on the illumination conditions and on [Link]. The class(es) are first to be defined (Figure 1, step 3). It is generally more efficient to select a single class, either the one with simple colours (such as the sky) or the less represented one (such as the green vegetation for sparse canopies, or the soil background for the denser canopies). In this case the non-classified pixels will be considered as belonging to the other class (sky or soil if the vegetation was first selected; vegetation if the sky or soil was first selected). [Link] case, all the pixels that are not allocated to one or the other class are considered mixed and processed later as such.
At the beginning of this classification process, the user can use different indices to roughly classify the images by thresholding methods (Figure 1, step 4) and can then interactively refine his classification (Figure 1, step 5). Once the allocation of the colours to the defined classes is completed, the images are transformed into binarised images.
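As an illustration of such an index-based pre-classification, the sketch below thresholds the excess-green index of an RGB image. The index choice and the threshold value are illustrative assumptions, not necessarily the indices implemented in CAN-EYE.

    # Minimal sketch of an index-based pre-classification, assuming an RGB image
    # loaded as a NumPy array. The excess-green index and the threshold value are
    # illustration choices only.
    import numpy as np

    def rough_green_mask(rgb, threshold=20.0):
        """Return a boolean mask flagging pixels that are likely green vegetation."""
        r, g, b = (rgb[..., i].astype(float) for i in range(3))
        exg = 2.0 * g - r - b       # excess-green index
        return exg > threshold      # True = candidate vegetation pixel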
Generation of the outputs
[Link] are binarised using the classification results (Figure 1, step 6). A series of files is produced [Link] (Figure 1, step 7), including a report file (html format) where all the elements of the process are [Link] (EXCEL or ASCII format) are also created, where the gap fractions and the computed LAI, ALA, fAPAR and FCOVER are stored, depending on the type of processed images. Optionally, intermediate results (matlab file format) can also be stored, allowing CAN-EYE to run in batch mode to perform new or additional processing.
Note that there is also a possibility to summarize the results from a series of CAN-EYE runs in an excel file.
2. INSTALLATION
The CAN-EYE installation is quite easy:
Check the system type (32 bits/64 bits): select the My Computer icon on your [Link] type.
Download the corresponding Matlab Component Runtime (version R2012a, 32 bits or 64 bits) from the Mathworks web site ([Link]).
Install the Matlab Component Runtime (MCR) by clicking on MCR_R2012a_win32_installer.exe or MCR_R2012a_win64_installer.exe depending on your system type and follow the instructions. It is required that you install the MCR with Administrator rights.
Add the path to the MCR to your environment path:
o Either by opening a command prompt: click on the Start Menu, then Execute, and type cmd. When the DOS window opens, type: set PATH=C:\Program Files\MATLAB\MATLAB Component Runtime\v717;%PATH%
o Or select the My Computer icon on your desktop, right-click the icon and select Properties from the menu, select the Advanced tab and click on Environment Variables. Your environment variables are listed. Add C:\Program Files\MATLAB\MATLAB Component Runtime\v717 to the path variable.
Create a directory that will contain the CAN-EYE executable file and the associated directories (/Help, /Data, /Param_V6).
Copy CAN_EYE_V6313_yyyy_mm_dd_bits.[Link] CAN_EYE_V6313_yyyy_mm_dd_bits.exe. It will install all the CAN-EYE files and associated directories.
You should now be ready to launch CAN-EYE by clicking on CAN_EYE_VXXX.exe.
Series of images (either jpg or tiff format) to be processed at once must be stored in the same directory (Figure 2). These images are assumed to correspond to the same ESU (Elementary Sampling Unit) [Link], [Link], and the same size.
Note that, considering the assumptions made in the CAN-EYE Poisson model, it is not correct to estimate the LAI from the gap fraction evaluated on a single image. A minimum of 8 images is required (Weiss et al., 2003). No more than 20 images can be processed by CAN-EYE at [Link], if more than 20 images are available for the same canopy, the user must organize them into 2 or more directories. It is therefore not possible with CAN-EYE to determine the LAI of a single tree or plant.
[Link] large differences in illumination conditions (such as strong direct light or strong diffuse conditions), it is recommended to split the series of images into homogeneous sub-series. The same applies obviously for photos taken over the same canopy but looking either upward or downward: in the directory, only images taken for a given direction (up or down) should be present.
CAN-EYE accepts only TIFF (.tif) and JPEG (.jpg) image formats, or a binary format for already classified images (see the corresponding sections below). The images can be of any size (resolution). However, all the images to be processed concurrently and stored in a single directory should have the same format, size and camera setup (zoom, ...), as well as the same direction (up or down). If this is not the case, create as many directories as combinations of format and size (as well as camera setup and direction).
The image name is not important ([Link]). Any names are therefore accepted and can be tracked later through the processing and report files.
Hemispherical images
[Link] a camera + [Link], ALA, FAPAR and FCOVER.
It is possible to directly process:
RGB images (jpeg or tiff) acquired with the system:
o upward: camera on the ground looking at the sky
o downward: camera above the canopy looking at the soil
Results will be stored in a subdirectory created in the image directory and called CE_P180_imagedirectory.
Already Classified Images (Binary), i.e. [Link] image type can be taken into account:
o Classified images from which the useful part has already been extracted (i.e. the fish-eye lens calibration has been taken into account: FOV, COI, ...), such as CAN-EYE provides automatically during a processing: intermediate results ([Link]) are stored in a zip file called CNE_DirectoryName that includes: (i) a header file (ASCII file), CNE_DirectoryName.hdr, with two lines; the first line provides the height of the binarised images, the second line provides the width of the binarised images. (ii) Binary files named [Link] coded in unsigned integer, 8 bits (uint8). The gap fraction is between 0 and 100 (0 = vegetation, 100 = gap, values in between = mixed pixels); invalid values (corresponding to masked areas) = 255.
o Classified images from which the useful part is not already extracted (the original image size is kept) and which are issued from an external processing: this allows deriving CAN-EYE variables from images classified with tools other than CAN-EYE. The files must be stored in a zip file called CIE_DirectoryName that [Link], 8 bits (uint8). The gap fraction is between 0 and 100 (0 = vegetation, 100 = gap, values in between = mixed pixels); invalid values (corresponding to masked areas) = 255.
For already classified images, the user can either choose a directory that contains CIE_name.zip or CNE_name.zip, or a directory that contains several directories, themselves containing one or several CIE_name.zip (a kind of batch process). For each CIE_name.zip, results will be stored in a subdirectory called CE_P180_CIE_name.
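For users preparing such archives outside CAN-EYE, the sketch below reads a classified-image zip under the assumptions stated above (a two-line ASCII .hdr file giving the height then the width, and uint8 binary images with 0 = vegetation, 100 = gap, 255 = masked). Iterating over every non-.hdr entry is an assumption, since the exact binary file naming is not specified here.

    # Minimal sketch of reading a CNE_/CIE_ style zip archive of classified images.
    import zipfile
    import numpy as np

    def read_classified_zip(zip_path):
        images = {}
        with zipfile.ZipFile(zip_path) as zf:
            hdr_name = next(n for n in zf.namelist() if n.lower().endswith(".hdr"))
            height, width = (int(v) for v in zf.read(hdr_name).split()[:2])
            for name in zf.namelist():
                if name.lower().endswith(".hdr"):
                    continue
                raw = np.frombuffer(zf.read(name), dtype=np.uint8)
                images[name] = raw.reshape(height, width)  # assumes size matches the header
        return images  # dict of entry name -> 2D uint8 array (0-100 gap fraction, 255 masked)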
Select Images at 57.5°. This menu allows the processing of images acquired with a camera inclined at 57.5° from the vertical. For this particular direction the gap fraction is independent of the leaf inclination angle (Weiss et al., 2003). This allows the derivation of LAI only.
It is possible to directly process:
RGB images (jpeg or tiff) acquired with the system:
o upward: camera on the ground looking at the sky
o downward: camera above the canopy looking at the soil
Already Classified Images (Binary), i.e. [Link] image type can be taken into account:
o Classified images from which the useful part has already been extracted (i.e. the lens calibration was taken into account: FOV, focal length), such as CAN-EYE provides automatically during a processing: intermediate results ([Link] images) are stored in a zip file called CNE_DirectoryName that includes: (i) a header file (ASCII file), CNE_DirectoryName.hdr, with two lines; the first line provides the height of the binarised images, the second line provides the width of the binarised images. (ii) Binary files named [Link] coded in unsigned integer, 8 bits (uint8). The gap fraction is between 0 and 100 (0 = vegetation, 100 = gap, values in between = mixed pixels); invalid values (corresponding to masked areas) = 255.
o Classified images from which the useful part is not already extracted (the original image size is kept) and which are issued from an external processing: this allows deriving CAN-EYE variables from images classified with tools other than CAN-EYE. The files must be stored in a zip file called CIE_DirectoryName that [Link], 8 bits (uint8). The gap fraction is between 0 and 100 (0 = vegetation, 100 = gap, values in between = mixed pixels); invalid values (corresponding to masked areas) = 255.
Results will be stored in a subdirectory called CE_P57_Directoryname. For already classified images, the user can either choose a directory that contains CIE_name.zip or CNE_name.zip, or a directory that contains several directories, themselves containing one or several CIE_name.zip (a kind of batch process). For each CIE_name.zip, results will be stored in a subdirectory called CE_P57_CIE_name.
Select Images at nadir (0°). This menu allows the processing of images acquired with a [Link]. This menu allows the processing of images acquired with a camera (no fisheye lens) at the ground [Link]:
RGB images (jpeg or tiff) acquired with the system:
o upward: camera on the ground looking at the sky
o downward: camera above the canopy looking at the soil
Once the user has chosen one of these two options, he is asked to choose a processing directory that contains a series of images (either jpg or tiff format) to be processed at the same time. Then the user is asked to provide the CAN-EYE processing parameters. Results are stored in a subdirectory called CE_NADIR_Directoryname.
3.2. Defining the processing parameters
Once the directory containing the images to be processed is selected, the user must provide the processing parameters (including optics characterization and CAN-EYE specific parameters), which differ with the image type. The information must be manually entered in a specific window (described hereafter). The processing parameters are then stored in the Param_V6 subdirectory (created where the Can_Eye.exe file is located), with a default proposed name. For hemispherical images, some precomputations are also performed and saved in order to save time when processing new directories.
Processing parameters for DHP
This window allows defining all the camera + fisheye lens characteristics (optical centre, projection function), as well as the characteristics required for the processing (angular resolution, FAPAR computation). This allows making some precomputations that are saved to speed up the processing when using the system for another set of images acquired in the same conditions.
Note that CAN-EYE supports only two fisheye projection types:
Polar projection function: the angular distances (in degrees) in the object region are proportional to radial distances in pixels on the image plane.
Projection function assuming that the angular distances (in degrees) in the object region are related to radial distances in pixels on the image plane by a polynomial function (order 1 or 2).
It is possible, within CAN-EYE, to determine the camera + fisheye lens characteristics. For more information, see section 6.
User Name: by default, displays the user name environment system variable of the computer.
Comment: add any useful comment (it will be written in the processing report and output files).
Image size: automatically filled in by CAN-EYE.
Optical centre & Projection function: these characteristics can either be loaded from a matlab file generated with the CAN-EYE calibration menu (section 6) or directly provided by the user (if you choose create, see hereafter).
COI (Circle Of Interest) [Link], set to 0-60° (zenith angles higher than 60° are not taken into account due to the large occurrence of mixed pixels in these areas). Do not use a COI value that is outside the domain used to calibrate the projection function.
Sub-Sample Factor: if the images are too numerous or too large, the computer may [Link] one pixel over 2 (Sub-Sample Factor = 2) or one pixel over 3 (Sub-Sample Factor = 3).
Angular resolution in the zenith (θ) and azimuth (φ) directions (°): determines the angles for which the gap fraction will be computed. Low values induce higher computation time. By default, it is set to the lowest value (highest resolution) for both angles.
FCover (in degrees): defines the size of the solid angle used to compute the cover fraction ([Link]). By default it is set to 10 degrees.
FAPAR: computed as the integral of (1 - gap fraction) during the sun course. The latter is determined for a given day, provided as the day number of the year (default is the acquisition date of the image), at a given latitude in degrees (default is 43° but you need to provide the latitude of your experiment site). A sketch of the corresponding solar course computation is given below.
SAVING DATA: results can be written in an excel file, an ascii file or both.
SAVE: click on this button once you have finished filling in the parameters. The parameter file will be saved in a subdirectory called Param_V6 of the directory where the Can_Eye.exe file is located. This parameter file can be used later on to process images issued from the same system (camera + fisheye lens). Note that this allows performing some precomputations and will speed up future processing.
The default parameter file name for the specific window shown here as an example is:
P180_2112_2816_Cent_1063_1390_ProjFuncDeg1_COI60_Sub1_Teta5_Phi20_FCov1[Link]
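As an illustration of the sun course used for the FAPAR integral, the sketch below computes the solar zenith angle over a day for a given latitude and day of year. The declination formula and the sampling of the hour angle are textbook simplifications, not necessarily the expressions used inside CAN-EYE.

    # Back-of-the-envelope sketch of the solar course for a given latitude and day of year.
    import numpy as np

    def solar_zenith_course(latitude_deg, doy, n_steps=48):
        lat = np.radians(latitude_deg)
        decl = np.radians(23.45) * np.sin(2 * np.pi * (284 + doy) / 365.0)  # solar declination
        hour_angle = np.linspace(-np.pi, np.pi, n_steps)                    # -180 deg ... +180 deg
        cos_sza = np.sin(lat) * np.sin(decl) + np.cos(lat) * np.cos(decl) * np.cos(hour_angle)
        sza = np.arccos(np.clip(cos_sza, -1.0, 1.0))
        return hour_angle, np.where(cos_sza > 0, sza, np.nan)  # NaN when the sun is below the horizon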
Creating Projection Function and Optical Centre Characteristics
When pressing the create button in the CALIBRATION PARAMETERS subsection, the user must enter the characteristics of the lens used to acquire the images:
Optical centre: location of the optical centre along the lines (Y) and rows (X), knowing that the upper left corner of the image has coordinates (1,1) and the lower right corner is located at (NbLines, NbRows).
Projection function: the viewing angle (in degrees) is considered as a polynomial function (maximum order = 3) of the distance (in pixels) between a pixel of the image and the optical centre (if the degree of the polynomial is 1, then the projection is assumed to be polar). Be very careful when entering the coefficients to sort them in the right order (descending power). Note that after entering the polynomial coefficients, the polynomial function is plotted so that the user is able to check that it is correct.
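For illustration, the sketch below evaluates such a projection polynomial with coefficients given in descending powers, as they are entered in the window. The example coefficients are placeholders, not values from a real calibration.

    # Minimal sketch of the projection function: view zenith angle (degrees) as a
    # polynomial of the pixel distance to the optical centre.
    import numpy as np

    def pixel_radius_to_zenith(radius_px, coeffs_desc):
        """coeffs_desc: polynomial coefficients in descending powers, e.g. [a1, 0.0] for a polar lens."""
        return np.polyval(coeffs_desc, radius_px)

    # Example: a purely polar lens where 1000 pixels correspond to 90 degrees
    theta = pixel_radius_to_zenith(np.arange(0, 1001, 100), [90.0 / 1000.0, 0.0])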
Processing parameters for images at 57.5°
This window allows defining all the camera characteristics (field of view), as well as the characteristics required for the processing:
User Name: by default, displays the user name environment system variable of the computer.
Comment: add any useful comment (it will be written in the processing report and output files).
Image size: automatically filled in by CAN-EYE.
CCD sensor width: [Link] of the useful part of the images, i.e. [Link] sensor width depends on the camera. Table 1 provides a list of camera models and the corresponding sensor sizes (sources: [Link] [Link]).
Sub-Sample Factor: in case you are processing very high resolution images, and/or a low field of view camera, and/or a high number of images, your computer may be [Link] (SSF) to process only one pixel over SSF of the images.
Cell Size: the true leaf area index is computed using the Lang & Xiang average logarithm method. The cell size in pixels corresponds to the size of the cell where the local LAI is computed.
Output Format: results can be written either in an excel file, an ascii file or both.
Processing parameters for images acquired at nadir
This window allows defining all the camera characteristics (field of view), as well as the characteristics required for the processing:
User Name: by default, displays the user name environment system variable of the computer.
Comment: add any useful comment (it will be written in the processing report and output files).
Image size: automatically filled in by CAN-EYE.
CCD sensor width: [Link] of the useful part of the images, i.e. [Link] sensor width depends on the camera. Table 1 provides a list of camera models and the corresponding sensor sizes (sources: [Link] [Link]).
Sub-Sample Factor: in case you are processing very high resolution images, and/or a low field of view camera, and/or a high number of images, your computer may be [Link] (SSF) to process only one pixel over SSF of the images.
Output Format: results can be written either in an excel file, an ascii file or both.
3.3. SELECTING PERTINENT IMAGES
Once the processing parameters are defined, a window is opened, showing all the images contained in the processing directory. Some of the images may not be pertinent for the processing (fuzzy images for example): they must be eliminated.
TRASH IMAGE: click on this button and select the non-pertinent image(s) with the left mouse button. The selected images will become white, which means that they won't be used for the processing.
RESET: click on this button to reset the selection (in that case all the images in the directory are selected).
DONE: click on this button when you have finished selecting your pertinent images.
3.4. MASKING IMAGES
Some parts of the images must not be processed. This step allows masking these areas so that they will not be taken into account in the image processing.
There is also the possibility to apply a gamma correction factor to the image. This correction is just a visual correction to help the user better discriminate between vegetation and background. It will not impact the model inversion results. Note that we also intend to implement the slope correction described in (España et al., 2008) in a future release.
Once the pertinent images are selected, the user gets access to the following window:
After clicking this button, the user has to select the image he wants to mask (left mouse click on the desired image). The image then appears alone in the window, with 3 buttons available:
MASK:
After clicking on this button, the user has to define a polygon (corresponding to the masked
area on your image) by clicking the left mouse button to indicate the polygon vertices. The
user can then visualize the drawn polygon (see figure below). To end the masking, the user
just clicks on the right mouse button. The user can add as many masks as he wants (by
selecting again the mask button).
Figure 13. Secondary masking window (after having selected the image to be masked)
UNDO:
Clicking on this button cancels the last mask that was designed.
DONE:
Click on this button when you have finished the masking process on the image.
3.5. CLASS DEFINITION
Once the masking step is achieved, CAN-EYE runs the image indexation (this allows speeding up the rest of the processing). The CLASS DEFINITION window allows indicating the way you intend to achieve the classification with CAN-EYE. You have several choices, depending on whether you consider mixed pixels or not.
3.6. Classifying the images
Once the classes are defined, the classification window appears on the screen. The classification module is divided into four menus:
Menu 1 (top left): if the user selects TRUE COLOR, then the original images are displayed. If the user selects CLASSIF, then the original images are displayed with the classification colours: pixels already classified are shown in their class colour, except if they are mixed (case of more than 2 classes chosen) or if they belong to the second class (case of classification without considering mixed pixels). Selecting HELP displays this help page.
Menu 2 (bottom left): controls the display of the images and of the processing outputs. Note that if no radio button is selected, the user can zoom on a particular image simply by clicking on it (and then choose DISPLAY ALL to see all the images together).
Menu 3 (top right): displays the palette of all the reduced colours contained in the images. Colours with a red bullet represent more than 5% of all the pixels together, colours with a white bullet represent between 1 and 5% of all the pixels together, and colours without a bullet represent less than 1% of the images. During the classification process, the colours are organised so that all the colours belonging to a given class are grouped in a frame with a border of the same colour as the class. For mixed pixels, there is no border line and they are located at the bottom of the colour palette.
Menu 4 (bottom right): each class name is provided on a button. Clicking on this button allows changing the colour attributed to the class. This may be used to ease the classification process. On the left of the class name, round radio buttons are available. Once he has selected the radio button, the user is invited to click on pixels, either in the image or in the palette, that belong to this class. Once the selection is achieved, the user clicks the right mouse button to end the pixel or colour selection. All the pixels in the image that have the same colour as the ones previously selected are then classified in the chosen class. If the user selects the square radio button located at the left of the class name, he has to select a polygon (same as in the masking process) to force a whole part of an image to belong to the class, without having an impact on the pixels that are not included in this area. This may be useful for example when some over-exposed parts of leaves are classified as sky (since they appear as bright as the sky in other parts of the image) while the user knows that these pixels belong to leaves.
When the user is pleased with his classification and clicks on DONE, CAN-EYE processes
the images to derive the different output variables: LAI, ALA, FAPAR, FCOVER. Figures are
displayed on the screen and are saved in the output directory.
4. CAN-EYE OUTPUT DESCRIPTION
The following sections describe the CAN-EYE outputs as well as the theoretical background allowing the estimations. Table 2 presents the variables that CAN-EYE derives from the set of digital images.
4.1. Definitions and theoretical background
Introduction
Leaf area index indirect measurement techniques are all based on contact frequency
(Warren-Wilson, 1959) or gap fraction (Ross, 1981) measurements. Contact frequency is the
probability that a beam (or a probe) penetrating inside the canopy will come into contact with
a vegetative element. Conversely, gap frequency is the probability that this beam will have no
contact with the vegetation elements until it reaches a reference level (generally the ground).
The term gap fraction is also often used and refers to the integrated value of the gap
frequency over a given domain and thus, to the quantity that can be measured, especially
using hemispherical images. Therefore, measuring gap fraction is equivalent to measuring
transmittance at ground level, in spectral domains where vegetative elements could be
assumed black. It is then possible to consider the mono-directional gap fraction which is the
fraction of ground observed in a given viewing direction (or in a given incident direction).
The objective of this section is to provide the theoretical background used in the CAN-
EYE software to derive canopy biophysical variables from the bi-directional gap fraction
measured from the hemispherical images.
Modeling the Gap Fraction
The mean number of contacts N between a probe (or a light beam) and the vegetation elements, cumulated from the top of the canopy down to level H in the viewing direction (\theta_v, \varphi_v), can be written:

Eq. 2:   N(H, \theta_v, \varphi_v) = \frac{1}{\cos\theta_v} \int_0^H G(h, \theta_v, \varphi_v) \, l(h) \, dh

where G(h, \theta_v, \varphi_v) is the projection function, i.e. the mean projection of a unit foliage area at level h in direction (\theta_v, \varphi_v), and l(h) is the leaf area density at level h. When the leaf area density and the projection function are considered independent of the level h in the canopy, Eq. 2 simplifies into Eq. 3:

Eq. 3:   N(L, \theta_v, \varphi_v) = G(\theta_v, \varphi_v) \, \frac{LAI}{\cos\theta_v}
The projection function is defined as follows:

Eq. 4:   G(\theta_v, \varphi_v) = \frac{1}{2\pi} \int_0^{2\pi} \int_0^{\pi/2} \left|\cos\psi\right| \, g(\theta_l, \varphi_l) \, \sin\theta_l \, d\theta_l \, d\varphi_l   (a)

         \cos\psi = \cos\theta_v \cos\theta_l + \sin\theta_v \sin\theta_l \cos(\varphi_v - \varphi_l)   (b)
where g(\theta_l, \varphi_l) is the probability density function that describes the leaf orientation distribution. This induces the two normalization conditions given in Eq. 5a and Eq. 5b.
Eq. 5:   \frac{1}{2\pi} \int_0^{2\pi} \int_0^{\pi/2} g(\theta_l, \varphi_l) \, \sin\theta_l \, d\theta_l \, d\varphi_l = 1   (a)

         \frac{1}{2\pi} \int_0^{2\pi} \int_0^{\pi/2} G(\theta_v, \varphi_v) \, \sin\theta_v \, d\theta_v \, d\varphi_v = \frac{1}{2}   (b)
The contact frequency is a very appealing quantity to indirectly estimate LAI because no
assumptions on leaf spatial distribution, shape, and size are required. Unfortunately, the
contact frequency is very difficult to measure in a representative way within canopies. This is
the reason why the gap fraction is generally preferred. In the case of a random spatial
distribution of infinitely small leaves, the gap fraction P_0(\theta_v, \varphi_v) in direction (\theta_v, \varphi_v) is related to the contact frequency by:

Eq. 6:   P_0(\theta_v, \varphi_v) = e^{-N(\theta_v, \varphi_v)} = \exp\left(-G(\theta_v, \varphi_v) \, \frac{LAI}{\cos\theta_v}\right)
This is known as the Poisson model. Conversely to the contact frequency that is linearly
related to LAI, the gap fraction is highly non linearly related to LAI. Nilson (1971)
demonstrated both from theoretical and empirical evidences that the gap fraction can
generally be expressed as an exponential function of the leaf area index even when the
random turbid medium assumptions associated to the Poisson model are not satisfied. In case
of clumped canopies, a modified expression of the Poisson model can be written:

P_0(\theta_v, \varphi_v) = \exp\left(-\lambda_0 \, G(\theta_v, \varphi_v) \, \frac{LAI}{\cos\theta_v}\right)

where \lambda_0 is the clumping parameter described below.
Modeling the leaf inclination distribution function
Among existing models, the ellipsoidal distribution is very convenient and widely used (Campbell, 1986; Campbell, 1990; Wang and Jarvis, 1988): the leaf inclination distribution is described by the ratio of the horizontal to the vertical axes of the ellipse, which is related to the average leaf inclination angle (the ALA variable in CAN-EYE), knowing that

\bar{\theta}_l = \int_0^{\pi/2} g(\theta_l) \, \theta_l \, d\theta_l

and that g(\theta_l) is the probability density function that verifies the normalization condition (Eq. 5).
Estimating leaf area index and leaf inclination from gap fraction
measurements
Use of a single direction: LAI57
Considering the inclined point quadrat method, Warren-Wilson (1960) has proposed a
formulation of the variation of the contact frequency as a function of the view zenith and
foliage inclination angles. Using this formulation, Warren-Wilson (1963) showed that for a
view angle of 57.5° the G-function (Eq. 4) can be considered as almost independent of leaf inclination (G = 0.5). Using the contact frequency at this particular 57.5° angle, Warren-Wilson
(1963) derived leaf area index independently from the leaf inclination distribution function
within an accuracy of about 7%. Bonhomme et al., (1974) applied this technique using the
gap fraction measurements and found a very good agreement between the actual and
estimated LAI values for young crops.
Therefore, for this particular viewing direction, LAI can be easily deduced from gap fraction:
Eq. 8:   P_0(57.5^\circ) = \exp\left(-\frac{0.5 \, LAI}{\cos(57.5^\circ)}\right) \;\Longrightarrow\; LAI = -\frac{\ln\left(P_0(57.5^\circ)\right)}{0.93}
The CAN-EYE software proposes an estimate of the LAI derived from this equation, called
LAI57.
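For illustration, Eq. 8 reduces to a one-line computation; the numerical value below is only a worked example.

    # LAI from the gap fraction measured at 57.5 deg, where 0.5 / cos(57.5 deg) ~= 0.93.
    import numpy as np

    def lai57(po_57):
        """LAI derived from the gap fraction Po measured at 57.5 deg view zenith angle."""
        return -np.log(po_57) / 0.93

    print(lai57(0.25))  # e.g. Po = 0.25 -> LAI ~= 1.49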
Among the several methods described in Weiss et al. (2004), the LAI estimation in the CAN-EYE software is performed by model inversion since, conversely to the use of Miller's formula, it can take into account only a part of the zenith angle range sampled by hemispherical images. This is very useful since there is a possibility to reduce the image field of view to less than 90° zenith angle. This feature is very important due to the high probability of mixed pixels in the part of the image corresponding to large zenith view angles. LAI and ALA are directly retrieved in CAN-EYE by inverting Eq. 6, assuming an ellipsoidal distribution of the leaf inclination and using look-up-table (LUT) techniques (Knyazikhin et al., 1998; Weiss et al., 2000). A large range of random combinations of LAI (between 0 and 10, step of 0.01) and ALA (between 10° and 80°, step of 2°) values is used to build a database made of the corresponding gap fraction values (Eq. 6) in the zenithal directions defined by the CAN-EYE user (parameter window definition during the CAN-EYE processing). The process then consists in selecting the LUT element in the database that is the closest to the measured P_0. The distance (cost function J_k) of the k-th element of the LUT to the measured gap fraction is computed as the sum of two terms:
CAN-EYE V5.1 (Eq. 7):

J_k = \sum_{i=1}^{Nb\_Zenith\_Dir} w_i \, \frac{\left[P_0^{LUT(k)}(\theta_i) - P_0^{MES}(\theta_i)\right]^2}{\sigma_{MOD}\left(P_0^{MES}(\theta_i)\right)} \;+\; \frac{ALA^{LUT(k)} - 60}{30}

(first term: weighted distance to the measurements; second term: regularization)

CAN-EYE V6.1 (Eq. 8):

J_k = \sum_{i=1}^{Nb\_Zenith\_Dir} w_i \, \frac{\left[P_0^{LUT(k)}(\theta_i) - P_0^{MES}(\theta_i)\right]^2}{\sigma_{MOD}\left(P_0^{MES}(\theta_i)\right)} \;+\; \frac{PAI^{LUT(k)} - PAI_{57}}{PAI_{57}}

(first term: weighted distance to the measurements; second term: regularization)
The first term computes a weighted relative root mean square error between the measured gap fraction and the LUT one. The weights w_i take into account the fact that some zenithal directions may contain a lot of masked pixels and therefore the corresponding gap fraction may not be very representative of the image.
The relative root mean square error is divided by a modelled standard deviation of the measured gap fraction, \sigma_{MOD}(P_0^{MES}(\theta_i)), derived from the empirical standard deviation \sigma(P_0^{MES}(\theta_i)) computed, for each zenithal direction \theta_i, from the images corresponding to the same plot when estimating the measured gap fraction after the CAN-EYE classification step. In order to smooth zenithal variations, a second order polynomial is fitted on \sigma(P_0^{MES}(\theta_i)) to provide \sigma_{MOD}(P_0^{MES}(\theta_i)).
The second term of Eq. 7 and Eq. 8 is a regularization term (Combal et al., 2002) that imposes constraints to improve the PAI estimates. Two formulations are proposed:
Constraint used in CAN-EYE V5.1 on the retrieved ALA values, which assumes an average leaf angle close to 60° ± 30°.
Constraint used in CAN-EYE V6.1 on the retrieved PAI value, which must be close to the one retrieved from the zenithal ring at 57.5°. This constraint is more efficient and does not rely on an a priori assumption as in Eq. 7, but it can be computed only when the 57.5° ring is available (COI of at least 60°).
The LUT gap fraction that provides the minimum value of Jk is then considered as the
solution. The corresponding LAI and ALA provide the estimate of the measured CAN-EYE
leaf area index and average leaf inclination angle. As there is no assumption about clumping
in the expression of the gap fraction used to simulate the LUT (Eq. 6), the foliage is assumed
randomly distributed, which is generally not the case in actual canopies. Therefore, retrieval
of LAI based on the Poisson model and using gap fraction measurements will provide
estimates of an effective LAI, LAIeff , and corresponding average inclination angle ALAeff that
allows the description of the observed gap fraction assuming a random spatial distribution.
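To make the look-up-table procedure concrete, the sketch below builds a coarse LUT of Poisson-model gap fractions and selects the entry closest to a measurement. The ellipsoidal G-function uses Campbell's (1986, 1990) approximations, the cost keeps only the first (weighted distance) term of Eq. 8, and the LAI/ALA sampling is much coarser than the CAN-EYE grid: it illustrates the principle, not the actual implementation.

    # Condensed sketch of a LUT inversion of the Poisson gap fraction model.
    import numpy as np

    def g_ellipsoidal(theta, ala_deg):
        """Ellipsoidal projection function G(theta) for an average leaf angle in degrees
        (Campbell approximation linking ALA to the ellipsoid axis ratio x)."""
        x = (np.radians(ala_deg) / 9.65) ** (-1.0 / 1.65) - 3.0
        k = np.sqrt(x**2 + np.tan(theta) ** 2) / (x + 1.774 * (x + 1.182) ** -0.733)
        return k * np.cos(theta)

    def gap_fraction_poisson(lai, ala_deg, theta):
        """Poisson-model gap fraction Po(theta) (Eq. 6)."""
        return np.exp(-g_ellipsoidal(theta, ala_deg) * lai / np.cos(theta))

    def invert_lut(po_measured, theta, weights=None):
        """Return the (LAI, ALA) pair whose modelled Po(theta) is closest to the measurement."""
        theta = np.asarray(theta, dtype=float)
        w = np.ones_like(theta) if weights is None else np.asarray(weights, dtype=float)
        best, best_cost = None, np.inf
        for lai in np.arange(0.1, 10.0, 0.1):          # coarse grid for illustration
            for ala in np.arange(10.0, 81.0, 5.0):
                po_lut = gap_fraction_poisson(lai, ala, theta)
                cost = np.sum(w * (po_lut - po_measured) ** 2)
                if cost < best_cost:
                    best, best_cost = (lai, ala), cost
        return best

    # Usage: a pseudo-measurement generated with the same model, then inverted
    theta = np.radians([7.5, 22.5, 37.5, 52.5])
    po_obs = gap_fraction_poisson(2.0, 55.0, theta)
    print(invert_lut(po_obs, theta))  # expected to be close to (2.0, 55.0)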
Note that CAN-EYE also proposes other ways of computing PAIeff and ALAeff, using Miller's formula (Miller, 1967), which assumes that the gap fraction depends only on the view zenith angle:

Eq. 10:   PAI = 2 \int_0^{\pi/2} -\ln\left(P_0(\theta_v)\right) \, \cos\theta_v \, \sin\theta_v \, d\theta_v
Welles and Norman (1991) proposed a practical method to compute the integral of Eq. 10 from gap fraction measurements in several directions for the LAI2000 instrument. CAN-EYE provides the effective PAI estimates using both the Miller and LAI2000 approaches. For LAI2000, the ring angular response is taken into account and the computation is made for 3, 4 and 5 rings (Weiss et al., 2004).
The true LAI, which can be measured only using a planimeter (with however possible allometric relationships to reduce the sampling (Frazer et al., 1997)), is related to the effective leaf area index through:

LAI_{eff} = \lambda_0 \, LAI

where \lambda_0 is the aggregation or dispersion parameter (Nilson, 1971; Lemeur and Blad, 1974) or clumping index (Chen and Black, 1992). It depends both on the plant structure, i.e. the way foliage is located along stems for plants and along trunks, branches or shoots for trees, and on the canopy structure, i.e. the relative position of the plants in the canopy. The shape and size of leaves might also play an important role in the clumping.
In CAN-EYE, the clumping index is computed using the Lang and Yueqin (1986) logarithm gap fraction averaging method. The principle is based on the assumption that vegetation elements are locally randomly distributed. Each zenithal ring is divided into groups (called cells) of individual pixels. The size of the individual cells must be a compromise between two criteria: it should be large enough so that the statistics of the gap fraction are meaningful, and small enough so that the assumption of randomness of the leaf distribution within the cell is valid. For each cell, P_0 is computed as well as its logarithm. If there is no gap in the cell (only vegetation, i.e. P_0 = 0), P_0 is assumed to be equal to a P_0^{sat} value derived from the simple Poisson law, using a prescribed LAI_{sat} value. P_0^{cell}, as well as ln(P_0^{cell}), are then averaged over the azimuth and over the images for each zenithal ring. The averaging still takes into account the masked areas using the weights w_i. The ratio of these two quantities
provides the clumping parameter \lambda_0 for each zenithal ring:

\lambda_0(\theta, ALA_{eff}) = \frac{\ln\left[\mathrm{mean}\left(P_0^{cell}(\theta)\right)\right]}{\mathrm{mean}\left[\ln\left(P_0^{cell}(\theta)\right)\right]}
Note that, since P_0^{sat} is simulated using the Poisson model, it depends on the values chosen for both LAI_{sat} and the average leaf inclination angle; the clumping parameter is therefore computed for the whole range of variation of ALA and for LAI_{sat} varying between 8 and 12 (note that all the results in the CAN-EYE html report are provided for LAI_{sat} = 10). Then the same algorithm as described previously for the effective LAI is applied, by building a LUT using the modified Poisson model to provide LAI_{true} and ALA_{true} as well as the corresponding clumping parameter.
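The sketch below illustrates the cell-based logarithmic averaging idea on a single binarised image, under simplifying assumptions (square cells, no zenith rings, no masked-pixel weighting, an arbitrary saturated-cell value). It is an illustration of the principle, not the CAN-EYE implementation.

    # Minimal sketch of the Lang & Xiang logarithmic gap fraction averaging.
    import numpy as np

    def clumping_index(gap_image, cell_size=50, po_sat=np.exp(-5.0)):
        """gap_image: 2D array with 1 for gap (sky/soil) pixels and 0 for vegetation."""
        h, w = gap_image.shape
        po_cells = []
        for i in range(0, h - cell_size + 1, cell_size):
            for j in range(0, w - cell_size + 1, cell_size):
                po = gap_image[i:i + cell_size, j:j + cell_size].mean()
                po_cells.append(po if po > 0 else po_sat)  # saturated cell -> prescribed Po_sat
        po_cells = np.array(po_cells, dtype=float)
        # ratio of the log of the mean to the mean of the logs
        return np.log(po_cells.mean()) / np.log(po_cells).mean()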
LAI or PAI?
Claiming that devices and associated methods based on gap fraction measurements
provide an estimate of the leaf area index is not right since indirect measurements only allow
assessing plant area index. Indeed, it is not possible to know if some leaves are present behind
the stems, branches or trunk. Therefore, masking some parts of the plants (which is possible
using CAN-EYE) to keep only the visible leaves is not correct and could lead to large under-
estimation of the actual LAI value, depending on the way leaves are grouped with the other
parts of the plant. Therefore, all CAN-EYE outputs correspond to plant area index and not
leaf area index.
Cover fraction computation
The cover fraction (FCOVER) is derived from the gap fraction in the nadir viewing direction:

Eq. 12:   FCOVER = 1 - P_0(0)

Using hemispherical images, it is not possible to get a value in the exact nadir direction, and the cover fraction must be integrated over a range of zenith angles. In CAN-EYE, the default value for this range is set to 0-10°. The user can change this value when defining the CAN-EYE parameters (which also concern the description of the hemispherical lens properties) at the beginning of the processing.
FAPAR computation
Three FAPAR values are computed:
1. The instantaneous black sky fAPAR (fAPAR_BS): it is the black sky fAPAR at a given solar position (date, hour and latitude). Depending on latitude, CAN-EYE computes the solar zenith angle every solar hour during half the day (there is symmetry around 12:00). The instantaneous fAPAR is then approximated at each solar hour as the complement of the gap fraction in the corresponding solar zenith direction:

fAPAR_{BS}(\theta_s) = 1 - P_0(\theta_s)

2. The daily integrated black sky (or direct) fAPAR is computed as the average of the instantaneous black sky fAPAR over the day, weighted by the cosine of the solar zenith angle:

fAPAR_{BS}^{daily} = \frac{\int_{sunrise}^{sunset} \left[1 - P_0\left(\theta_s(t)\right)\right] \cos\left(\theta_s(t)\right) dt}{\int_{sunrise}^{sunset} \cos\left(\theta_s(t)\right) dt}

3. The white sky (or diffuse) fAPAR is computed as the integral of the instantaneous black sky fAPAR over the hemisphere:

fAPAR_{WS} = 2 \int_0^{\pi/2} \left[1 - P_0(\theta)\right] \cos\theta \, \sin\theta \, d\theta
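As a small illustration of the daily black-sky integration, the sketch below weights the instantaneous fAPAR, 1 - Po(theta_s), by cos(theta_s) over the day. gap_fraction is any callable returning Po for zenith angles in radians (vectorised over NumPy arrays), and the solar zenith course can be obtained, for example, with the helper sketched in section 3.2; the cosine weighting follows the reconstruction above and is an assumption rather than a statement about the exact CAN-EYE formulation.

    # Minimal sketch of the daily black-sky fAPAR integration.
    import numpy as np

    def daily_black_sky_fapar(gap_fraction, sza_course):
        """sza_course: array of solar zenith angles (radians) over the day, NaN at night."""
        sza = sza_course[np.isfinite(sza_course)]
        weights = np.cos(sza)                      # proportional to the incoming direct flux
        fapar_inst = 1.0 - gap_fraction(sza)       # instantaneous black-sky fAPAR
        return np.sum(weights * fapar_inst) / np.sum(weights)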
4.2. Description of CAN-EYE output directory content
Hemispherical images
DHP results are contained in a subdirectory called CE_P180_xxx, where xxx is the name of
the directory that contains the images to be processed.
zenith angle for the computation of the true LAI. The average leaf inclination value as well as the RMSE between the computed and modelled clumping factor are indicated.
MEASURED GAP FRACTION VS LUT GAP FRACTION & AVERAGE PAI, ALA, FCOVER: the top graph shows the RMSE value between the mono-directional gap fraction computed from the images and the closest one found in the LUT (red line), as a function of the average leaf inclination angle value. The green line shows the corresponding PAI value (the one that provides the lowest RMSE) as a function of the average leaf inclination angle. The bottom graph shows the mono-directional gap fraction estimated from the images (determined by the classification step) as a function of the view zenith angle (green line). The red line indicates the mono-directional gap fraction of the LUT element that is the closest to the measurements (assuming no clumping effect, i.e. when estimating PAIeff and ALAeff). In black, the same is shown for the mono-directional gap fraction when considering the clumping factor (PAItrue, ALAtrue).
Images acquired at 57.5°
Results for images at 57.5° are contained in a subdirectory called CE_P57_xxx, where xxx is the name of the directory that contains the images to be processed.
5. SUMMARY MODULE
The Summary menu allows gathering the Excel result files of several CAN-EYE processings into a single one. After selecting the processing type (DHP = P180, images acquired at 57.5° = P57, or images acquired at nadir), the user is asked to choose a directory to process. The chosen directory should contain CAN-EYE processing result sub-directories (all called CE_P180*** or CE_P57*** depending on the processing type). The user is then asked to provide a name for the summary excel file in which all the results will be stored.
Hemispherical images
Six sheets are generated:
Effective PAI: column contents are, in order: directory name, CE_P180 file name, CEV6.1 PAI estimate, CEV5.1 PAI estimate, Miller PAI estimate, LAI2000 3, 4 and 5 rings estimates, processing date.
Effective ALA: column contents are, in order: directory name, CE_P180 file name, CEV6.1 ALA estimate, CEV5.1 ALA estimate, Miller ALA estimate, LAI2000 3, 4 and 5 rings estimates, processing date.
True PAI: column contents are, in order: directory name, CE_P180 file name, CEV6.1 PAI estimate, CEV5.1 PAI estimate, Miller PAI estimate, LAI2000 3, 4 and 5 rings estimates, processing date.
True ALA: column contents are, in order: directory name, CE_P180 file name, CEV6.1 ALA estimate, CEV5.1 ALA estimate, Miller ALA estimate, LAI2000 3, 4 and 5 rings estimates, processing date.
FCOVER: column contents are, in order: directory name, CE_P180 file name, CEV6.1 FCOVER estimate, processing date.
Daily fAPAR: column contents are, in order: directory name, CE_P180 file name, measured direct FAPAR estimate, measured modelled FAPAR estimate, estimated direct FAPAR estimate, estimated diffuse FAPAR estimate, processing date.
Images acquired at Nadir
Mean: column contents are, in order: directory name, CE_NADIR file name, average FCOVER value.
The second sheet's column contents are, in order: directory name, CE_NADIR file name, image name, individual FCOVER value for the corresponding image, processing date.
6. CALIBRATION MODULE (DHP only)
Optical systems are not perfect, and at least two main characteristics are required to perform an accurate processing of hemispherical images:
The coordinates of the optical centre.
The projection function. In CAN-EYE, the projection function is assumed to be a polar projection: angular distances (in degrees) in the object region are proportional to radial distances in pixels on the image plane.
Because in some situations the focal length may be manipulated by acting on the zoom system, the projection function must also be known for each focal length used.
Figure 18. Example of the Start sheet of the calibration excel file
The Results sheet contains the calibration parameters of your camera + fish-eye system derived from the calibration process: the optical centre position as well as the maximum field of view (FOV, in degrees) and the corresponding radius (in pixels) of the image. These values can be directly entered in the CAN-EYE parameter window during the processing of hemispherical images (see section 3.2).
The contents of the two other sheets, as well as the principles of the measurement, are
described in the following.
Figure 19. Image coordinate system: a pixel [X, Y], the optical centre [X0, Y0] and the image size [Xsize, Ysize].
6.2. Optical centre characterization
The optical centre is defined by the projection of the optical axis onto the CCD matrix where the image is recorded. This point should therefore be invariant by rotation of the system around the optical axis. A simple method to get the optical centre consists in observing the coordinates of a point when it rotates around this axis. This can be achieved by drilling a small hole in the cap of the fish-eye lens and acquiring photographs for a series of positions of the cap. This is illustrated by Figure 20. It is possible to use several holes to check the consistency of the estimation of the optical centre (Figure 21).
Figure 20. Illustration of the holes drilled in the fish-eye cap. The red arrow indicates the
rotation of the cap.
Figure 21. A series of images taken for several positions of the fish-eye cap. In this case, three
holes were considered.
Figure 22 : Example of an excel file sheet to be filled to determine the optical centre of a
system.
Once the Optical Centre sheet is filled, the user must run CAN-EYE, go to the Calibration menu and select Optical Centre. CAN-EYE then asks to select the excel file, automatically computes the optical centre position and fills the Results sheet.
Figure 23 shows an example of optical centre adjustment in the case where three holes were
considered. Results show very consistent estimates of the coordinates of the optical centre, which is known with an accuracy better than one pixel.
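The fit illustrated in Figure 23 can be reproduced with a simple algebraic least-squares circle fit: the hole coordinates measured on the series of images lie on a circle whose centre is the optical centre. The sketch below is an illustration of the idea, not the CAN-EYE fitting code.

    # Algebraic least-squares circle fit (solving x^2 + y^2 = a*x + b*y + c).
    import numpy as np

    def fit_circle_centre(x, y):
        """Return (x0, y0, radius) of the circle that best fits the points (x, y)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        A = np.column_stack([x, y, np.ones_like(x)])
        b = x**2 + y**2
        (a1, a2, a3), *_ = np.linalg.lstsq(A, b, rcond=None)
        x0, y0 = a1 / 2.0, a2 / 2.0
        radius = np.sqrt(a3 + x0**2 + y0**2)
        return x0, y0, radius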
Figure 23. Example of a CAN-EYE output, showing the fitting of the circles to the hole positions in the case of three holes. The actual optical centre is shown by the red cross.
6.3. Projection function characterization
This section describes how to perform the measurements and fill the Projection Function sheet of the calibration excel file. The experimental design is described in Figure 25. It consists in a 50 x 50 cm, 1.5 cm thick frame from which a 30 x 30 cm square is excavated from the middle of one of the sides. The three sides of this gap are equipped with 30 cm long rulers. The camera is set horizontally, as well as the experimental design. The camera is aligned along the main axis of the design using the front nail and background line. Hemispherical photographs are taken at two distances (H and H' = H + Δ) from the centre of the design and along the optical axis. The corresponding sheet of the calibration excel file is shown in Figure 24. Note that it is required to have run the Optical Centre menu before being able to compute the projection function.
Figure 24. Example of a projection function sheet of the calibration excel file.
Let us assume that the two images are named Im1 (taken at distance H) and Im2 (taken at distance H + Δ). Im1 must be the image for which the ruler ticks are the most readable. Then, you have to look at the images using an image processing software (e.g. Paintshop, Microsoft Photo Editor) to read pixel coordinates in the image.
Read the optical centre position in cm on the rulers of Im1 and Im2 and fill cells B4, C4 and B5, C5 of the excel file (Projection Function sheet).
The quantity Δ can be easily measured by looking at one direction on the lateral ruler (Xp1, Yp1, cells C10, D10) on Im1 and reading the corresponding value h in cm (cell E10) for distance H. Then, for the distance H + Δ, the same point on Im2 corresponds to a value h' (cell E11) on the lateral ruler. It comes simply that Δ = h - h'.
On the perpendicular ruler, select two fixed directions (Xp2, Yp2, cells C16, D16) and (Xp3, Yp3, cells F16, G16) on Im1 and read the corresponding values x in cm (cells E16 and H16) on the perpendicular ruler. Do the same for Im2 (cells E17, H17; values x'). It is then possible to compute the actual distance H if Δ is known:

\tan(\theta_x) = \frac{x}{H}, \qquad \tan(\theta_x) = \frac{x'}{H + \Delta} \;\Longrightarrow\; H = \frac{x \, \Delta}{x - x'}
Once the distance H is known, the calibration of the projection function can be achieved by associating the pixel coordinates on one of the two images (select the one that is the most readable) with the actual distances read on the rulers. The coordinates have to be read along the line passing through the optical centre, for the three rulers. This can be done for each cm tick. The following equations are used to derive the angle from the values read on each ruler:
o For the perpendicular ruler: \theta_x = \arctan(x / H)
o For the lateral rulers: \theta_y = \arctan\left(W / (H - y)\right)
Therefore, for the different readings in cm on the left lateral ruler (cells A23 to AXX), report the column number of the corresponding pixel in the image (cells B23 to BXX). Perform the same for the perpendicular and lateral rulers.
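The sketch below gathers the small computations described above: the distance H from the two readings, the conversion of ruler readings to view angles, and a least-squares fit of the polar coefficient a of theta = a * r. Variable names (x, x_prime, delta, W) follow the sheet description and the formulas above; the whole is an illustrative helper, not the CAN-EYE calibration code.

    # Illustrative helpers for the projection-function calibration computations.
    import numpy as np

    def distance_from_two_images(x, x_prime, delta):
        """Distance H derived from the readings of the same direction on Im1 and Im2."""
        return x * delta / (x - x_prime)

    def angles_from_perpendicular_ruler(x_cm, H):
        """View angles (degrees) for readings x (cm) on the perpendicular ruler."""
        return np.degrees(np.arctan(np.asarray(x_cm, float) / H))

    def fit_polar_coefficient(radius_px, theta_deg):
        """Least-squares slope a of theta = a * r (polar projection through the origin)."""
        r = np.asarray(radius_px, float)
        t = np.asarray(theta_deg, float)
        return np.sum(r * t) / np.sum(r * r)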
Figure 25. Experimental design scheme: camera, front nail and background alignment marks, axis of the design, perpendicular ruler (readings x), left and right lateral rulers (readings y) at lateral offset W, camera at distance H.
Figure 26. Example of an image of the experimental design taken with the hemispherical
camera and used for the calibration of the projection function. The horizontal dotted yellow
corresponds to the diameter of the image passing through the optical centre (defined by its
coordinates as measured previously). The camera is aligned thanks to the front nail and
background line.
This process allows computing the coefficient a that relates the radius (in pixels) in the image to the corresponding viewing direction (in degrees). Then, the maximum field of view of the camera + fish-eye lens system can be computed by fitting a circle to the image (Figure 27).
Once the Projection Function sheet is filled, the user must run CAN-EYE, go to the Calibration menu and select Projection Function. CAN-EYE then asks to select the excel file, automatically computes the projection function parameters and fills the Results sheet.
7. REFERENCES
Andrieu, B. and Baret, F., 1993. Indirect methods of estimating crop structure from optical
measurements. In: R.B. C. Varlet-Grancher, H. Sinoquet (Editor), In Crop structure
and light microclimate - Characterization and Applications-. INRA, Paris, France, pp.
285-322.
Andrieu, B. and Sinoquet, H., 1993. Evaluation of structure description requirements for
predicting gap fraction of vegetation canopies. Agric. For. Meteorol., 65: 207-227.
Bonhomme, R., Varlet-Grancher, C. and Chartier, P., 1974. The use of hemispherical
photographs for determining the leaf area index of young crops. Photosynthetica, 8(3):
299-301.
Campbell, G.S., 1986. Extinction coefficients for radiation in plant canopies calculated using
an ellipsoidal inclination angle distribution. Agric. For. Meteorol., 36: 317-321.
Campbell, G.S., 1990. Derivation of an angle density function for canopies with ellipsoidal
leaf angle distributions. Agric. For. Meteorol., 49: 173-176.
Chen, J.M. and Black, T.A., 1992. Defining leaf area index for non-flat leaves. Plant Cell
Environ., 15: 421-429.
España, M.L., Baret, F. and Weiss, M., 2008. Slope correction for LAI estimation from gap
fraction measurements. Agricultural and Forest Meteorology, 148(10): 1553-1562.
Frazer, G.W., Trofymov, J.A. and Lertzman, K.P., 1997. A method for estimating canopy
openness, effective leaf area index, and photosynthetically active photon flux density
using hemispherical photography and computerized image analysis technique. BC-X-
373, Can. For. Serv. Pac. For. Cent. Inf.
Knyazikhin, Y., Martonchik, J.V., Myneni, R.B., Diner, D.J. and Running, S.W., 1998.
Synergistic algorithm for estimating vegetation canopy leaf area index and fraction of
absorbed photosynthetically active radiation from MODIS and MISR data. J.
Geophys. Res., 103(D24): 32257-32275.
Lang, A.R., 1991. Application of some of Cauchy's theorems to estimation of surface areas of
leaves, needles, and branches of plants, and light transmittance. Agric. For. Meteorol.,
55: 191-212.
Miller, J.B., 1967. A formula for average foliage density. Aust. J. Bot., 15: 141-144.
Nilson, T., 1971. A theoretical analysis of the frequency of gaps in plant stands. Agric.
Meteorol., 8: 25-38.
Ross, J., 1981. The radiation regime and architecture of plant stands, The Hague, 391 pp.
Wang, Y.P. and Jarvis, P.G., 1988. Mean leaf angles for the ellipsoidal inclination angle
distribution. Agric. For. Meteorol., 43: 319-321.
Warren-Wilson, J., 1959. Analysis of the spatial distribution of foliage by two-dimensional
point quadrats. New Phytol., 58: 92-101.
Warren-Wilson, J., 1960. Inclined point quadrats. New Phytol., 59: 1-8.
Warren-Wilson, J., 1963. Estimation of foliage denseness and foliage angle by inclined point
quadrats. Aust. J. Bot., 11: 95-105.
Watson, D.J., 1947. Comparative physiological studies in growth of field crops. I: Variation
in net assimilation rate and leaf area between species and varieties, and within and
between years. Ann. Bot., 11: 41-76.
Weiss, M., Baret, F., Myneni, R.B., Pragnère, A. and Knyazikhin, Y., 2000. Investigation of a
model inversion technique to estimate canopy biophysical variables from spectral and
directional reflectance data. Agronomie, 20: 3-22.
Weiss, M., Baret, F., Smith, G.J. and Jonckheere, I., 2003. Methods for in situ leaf area index
measurement, part II: from gap fraction to leaf area index: retrieval methods and
sampling strategies. Agric. For. Meteorol., submitted (August 2002).
Weiss, M., Baret, F., Smith, G.J. and Jonckheere, I., 2004. Methods for in situ leaf area index
measurement, part II: from gap fraction to leaf area index: retrieval methods and
sampling strategies. Agric. For. Meteorol., 121: 17-53.
Welles, J.M. and Norman, J.M., 1991. Instrument for indirect measurement of canopy
architecture. Agronomy J., 83(5): 818-825.