A Lightweight Method for Grape Berry Counting based on Automated 3D Bunch Reconstruction from a Single Image

Scarlett Liu¹, Mark Whitty² and Steve Cossell³
Abstract— Berry counting is an integral step towards grapevine yield estimation. As a traditional yield estimation step, counting berries by hand is tedious and time consuming. Recent methods have approached this using specialized stereo cameras and lighting rigs which are impractical for large-scale field application. This paper presents a lightweight method for generating a representative 3D reconstruction of an individual grape bunch from a single image taken from one side of the bunch. The results were poor prior to the application of a sparsity factor to compensate for bunches of varying sparsity; with it, the final result is an absolute average accuracy of 87.6% and an average error of 4.6%, with an R² value of 0.85. These results show promise for in vivo counting of berry numbers in a manner that is not computationally expensive.
Keywords: Grape, Berry, Viticulture, Image Processing, 3D Bunch Reconstruction

I. INTRODUCTION

Yield estimation in viticulture is notorious for producing poor estimates due to a range of sampling factors and a dependency on subjective interpretation of the state of vine maturity. This poor estimation costs hundreds of millions of dollars each year in contract adjustments, harvest logistics management, oak barrel purchases and tank space allocation, amongst others. The structure of vineyards means aerial imagery is only able to contribute a small amount to yield estimation, and other on-ground estimation methods are time consuming. Recent work by Nuske [1] in the US has shown the potential for image processing to speed up this analysis as well as to generate unbiased estimates with errors orders of magnitude smaller than those of manual estimates, leading to substantial cost savings.

In traditional yield estimation in vineyards, berry number is a critical parameter for early production forecasting, since the number of berries remains stable after fruit set [2]. The ratio of berry number per bunch to bunch size is also one of many factors governing the quality of the fruit at harvest. In current vineyards, berry counting is done by hand, which is labour intensive and time consuming. The advantages of image processing for yield component analysis, in terms of saving the time and effort involved in grape production forecasting, have been demonstrated in [3], [4], [5].
¹Scarlett Liu is with the School of Mechanical and Manufacturing Engineering, University of New South Wales, 2052 Sydney, Australia. sisi.liu@unsw.edu.au
²Mark Whitty is with the School of Mechanical and Manufacturing Engineering, University of New South Wales, 2052 Sydney, Australia. m.whitty@unsw.edu.au
³Steve Cossell is with the School of Mechanical and Manufacturing Engineering, University of New South Wales, 2052 Sydney, Australia. scos506@gmail.com
Grossetete et al. [6] and Diago et al. [4] applied image processing techniques to count berries on one side of a bunch, achieving average R² values of 0.92 and 0.82, respectively, between actual and detected berries per bunch. However, the image processing algorithm proposed in [6] cannot be used after véraison, since the reflection from the berry skin is affected by pruine, the waxy bloom that gives a matte surface to both green and purple berries. In the work presented by Diago [4], a dataset of 70 bunches from 7 varieties was tested, with R² values varying from 0.62 to 0.95 based on 10 bunches per variety (0.817 on average across the 7 cultivars). Setting the image processing techniques themselves aside, 10 bunches is not a representative sample for validating an image processing procedure on a single cultivar, especially for Cabernet Sauvignon and Shiraz, which are known for non-uniform bunch shapes; indeed, [4] obtained its lowest R² value of the 7 cultivars, 0.62, for Cabernet Sauvignon based on single images of bunches.

Beyond detecting berries on one side of a bunch from a single image, other work [5], [7] has shown the advantages of performing 3D reconstruction of grape bunches from stereo images for the purpose of estimating the number of grapes in a bunch. Their accuracy improved, achieving an R² value of up to 0.78, as opposed to more traditional 2D estimation techniques [3], which have been a staple of the image processing community [8], [9]. However, their 3D reconstruction relies on substantial manual input (it is semi-automatic) for each bunch, which is tedious even given an impressive user interface and thus cannot be applied on a large scale for reliable yield estimation. In terms of experimental scope, the data sets in [5] and [7] are small: 10 bunches from each of 10 cultivars, and 20 bunches from 14 vines in one block, respectively. The R² values achieved in these two papers, 0.71 and 0.78, are also not satisfactory for practical implementation in current vineyards. In addition, a specialized stereo camera arrangement was required, along with controlled lighting conditions, limiting the applicability to ex vivo analysis. Stereo cameras also have a minimum range which restricts the level of detail that may be achieved by moving closer, meaning in-field application within the confines of a sprawling canopy is impractical.

In order to increase the uptake of these image processing methods, low-cost and simpler solutions are needed that can be applied by farmers on the ground. The objective of this paper is therefore to form a representative 3D reconstruction of grape bunches from a single image for the purpose of accurate berry counting. The use of a single image only is a key feature, which simplifies the data capture process and keeps the cost manageable, to the point where cameras such as those contained in current smartphones can be used.
The lack of equipment overhead is particularly attractive to farmers in difficult economic environments.

In the remainder of this paper, Section II details the proposed method, before Section III describes the experimental data and methods used to validate it. Results follow in Section IV, before conclusions are drawn and recommendations for future work are made in Section V.

II. FUNDAMENTALS

Two major image processing components form the basis of the berry counting method. First, a 3D reconstruction of the bunch is formed to give an initial estimate of the number of berries. A sparsity factor is then calculated from the color of the berries and used to generate a final estimate of the number of berries; a minimal sketch of this pipeline follows the assumptions below. This paper is based on three assumptions:
1) The actual number of berries in a bunch is equal to the number of berries that fit in a volumetric shell derived from a single image of the bunch.
2) The sizes of the invisible berries follow the normal distribution of the sizes of the visible berries in the same bunch or sub-bunch.
3) The sparsity factor has an effect on estimating the number of berries per bunch.
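The overall flow can be read as: reconstruct a shell, count the spheres it holds, then correct that count for sparsity. The following Matlab-style sketch is illustrative only and is not the authors' implementation; reconstructBunch and sparsityFactor are hypothetical helper names corresponding to Sections II-A and II-B, and the rounding to an integer count is an assumption.

```matlab
% Minimal pipeline sketch (illustrative, not the authors' code).
% reconstructBunch and sparsityFactor are hypothetical helpers that
% follow Sections II-A and II-B respectively.
function BN = estimateBerryNumber(rgbImage)
    IBN = reconstructBunch(rgbImage);  % Initial Berry Number from the 3D shell model
    SF  = sparsityFactor(rgbImage);    % color-based compactness correction
    BN  = round(SF * IBN);             % final estimate (rounding is an assumption)
end
```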
 A. 3D reconstruction and initial estimate of berry number 
Given an image of a single bunch of grapes, the outline of the bunch is extracted from the R channel using Otsu's method [10]. The image is rotated until its major axis is approximately vertical. Each point on the outline is considered a candidate berry location to which a circle is fitted using a Hough transform, as demonstrated in Fig. 1. These circle locations and diameters are used to seed the 3D model by placing spheres of corresponding diameter in a single plane normal to the direction of view. To address overlapping berries at the edge of the bunch, a neighbour search within a specified distance is applied to find pairs of berries with extreme metrics within that distance; the berry with the larger metric is then moved forward in the z-direction (normal to the image plane) pixel by pixel, while the berry with the smaller metric is moved backward, until there is no overlap.

Beginning at the top detected berry, the 3D model is populated using the following process until the bottom of the bunch is reached (a code sketch is given after Fig. 2):
1) Find the first and last pixel of a horizontal section through the image and subtract the diameter of one berry.
2) Revolve this section about a vertical axis through its centre, forming a virtual circle.
3) Randomly pick a sphere diameter within the observed range of berry diameters.
4) Moving around the circumference of the virtual circle, attempt to place a new sphere at regular (1 degree) intervals.
5) At each interval, place a sphere at that location on the circumference only if no intersection with any existing sphere is detected.
6) Move down a defined step size (two pixels in this paper) and repeat. Once the model is fully populated, the number of berries is tallied and denoted the Initial Berry Number (IBN).

For step 3) above, the Hough transform is applied to the image of the bunch to detect all visible berries, and a normal distribution model is fitted to the radii of the detected berries; the radius used in step 3) is then drawn at random from this fitted distribution. Assumptions 1) and 2) are embedded here. Fig. 2 illustrates examples of single images and the corresponding shaded 3D reconstructions.

Fig. 1: Berries as seeds on the edge of a bunch

Fig. 2: 3D reconstruction from a single image
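The seeding and shell-filling procedure can be sketched in Matlab as follows. This is an illustrative reimplementation under stated assumptions, not the authors' code: the candidate radius range passed to imfindcircles (8 to 40 pixels) and the 'dark' object polarity are assumed; the rotation of the bunch to a vertical major axis, the seed spheres fitted to the outline and the z-shifting of overlapping edge berries are omitted for brevity; and the foreground polarity of the thresholded silhouette is assumed. It requires the Image Processing Toolbox.

```matlab
% Illustrative sketch of Section II-A (not the authors' implementation).
function IBN = reconstructBunch(rgbImage)
    R  = rgbImage(:,:,1);
    BW = imbinarize(R, graythresh(R));     % Otsu on the R channel; the bunch is
                                           % assumed to be the foreground (invert if not)

    % Visible berries via the circular Hough transform; radius range and the
    % assumption that berries are darker than the background are illustrative.
    [~, radii] = imfindcircles(R, [8 40], 'ObjectPolarity', 'dark', 'Sensitivity', 0.9);
    assert(~isempty(radii), 'no visible berries detected');
    mu = mean(radii);  sigma = std(radii); % normal model of visible berry radii

    spheres = zeros(0, 4);                 % placed spheres as rows [x y z r]
    rows = find(any(BW, 2));               % image rows containing the silhouette
    for y = rows(1):2:rows(end)            % move down the bunch two pixels at a time
        cols = find(BW(y, :));
        if isempty(cols), continue; end
        left  = cols(1)   + mu;            % inset the section by roughly one
        right = cols(end) - mu;            % berry diameter in total
        if right <= left, continue; end
        cx = (left + right) / 2;           % centre of the horizontal section
        cr = (right - left) / 2;           % radius of the virtual circle
        r  = mu + sigma * randn;           % random radius drawn from the fitted normal
        r  = min(max(r, min(radii)), max(radii)); % clamp to the observed range
        for theta = 0:359                  % try placements at 1-degree intervals
            x = cx + cr * cosd(theta);
            z =      cr * sind(theta);
            if ~intersectsAny(spheres, x, y, z, r)
                spheres(end+1, :) = [x y z r]; %#ok<AGROW>
            end
        end
    end
    IBN = size(spheres, 1);                % Initial Berry Number
end

function clash = intersectsAny(spheres, x, y, z, r)
    % True if a sphere at (x,y,z) with radius r would overlap any placed sphere.
    if isempty(spheres), clash = false; return; end
    d = sqrt(sum((spheres(:,1:3) - [x y z]).^2, 2));
    clash = any(d < spheres(:,4) + r);
end
```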
B. Sparsity factor

Assumption 1) implies a convex hull for a healthy, compact grape bunch. However, not all bunches are compact, so IBN is not accurate enough for a loosely packed bunch when only the image processing techniques above are applied. Hence a sparsity factor (SF) is proposed to quantify the compactness of a bunch; this indicator is used in the final estimate of berry number. Assumption 3) is embedded here. To obtain SF for each bunch, each image was processed according to the following sequence of operations:

1) Automatically crop the image to the outline of the bunch as detected above.
2) Automatically threshold the Red channel to obtain a binary image using Otsu's method.
3) Automatically threshold the Saturation channel (from HSV) to obtain a binary image using Otsu's method.
4) Calculate the area of each of these two thresholded images, giving AR and AS, as shown in Fig. 3.
5) Calculate the sparsity factor according to SF = (AR − AS)/AR.

Fig. 3: Sparsity factor calculation
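A minimal Matlab sketch of this computation is given below, assuming the formula as reconstructed in step 5) and omitting the automatic crop to the bunch outline; it is illustrative and not the authors' code.

```matlab
% Illustrative sketch of the sparsity factor (Section II-B), not the authors' code.
function SF = sparsityFactor(rgbImage)
    R   = rgbImage(:,:,1);                 % Red channel
    HSV = rgb2hsv(rgbImage);
    S   = HSV(:,:,2);                      % Saturation channel
    BWr = imbinarize(R, graythresh(R));    % Otsu threshold of the Red channel
    BWs = imbinarize(S, graythresh(S));    % Otsu threshold of the Saturation channel
    AR  = nnz(BWr);                        % thresholded area AR, in pixels
    AS  = nnz(BWs);                        % thresholded area AS, in pixels
    SF  = (AR - AS) / AR;                  % sparsity factor as in step 5)
end
```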
C. Final estimate of berry number

The sparsity factor is then used to improve the estimate of the number of berries through the following formula:

BN = SF × IBN    (1)

where BN is the final estimate of the number of berries.
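As a purely hypothetical worked example of Eq. (1), with invented numbers for illustration only: a bunch whose reconstructed shell holds IBN = 120 spheres and whose sparsity factor is SF = 0.85 would receive a final estimate of BN = 0.85 × 120 = 102 berries.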
III. EXPERIMENTS
Data was collected by viticulturists at Treasury Wine Estates, Camatta Hills, California, in September and October 2013. Photographs were taken of a total of 112 individual bunches of two red grape varieties, Shiraz and Cabernet Sauvignon, selected at random.

Images were captured at a resolution of 3968 × 2976 pixels using a consumer-grade compact camera (Olympus SP600UZ) on automatic mode with the flash turned on. These images were then processed using Matlab according to the method in Section II. Firstly, a 3D reconstruction of each bunch was generated from a single image of that bunch, providing an initial estimate of the number of berries. Secondly, the sparsity factor for each bunch was calculated and applied to the initial estimate to obtain a final estimate of the number of berries.

Each bunch was then deconstructed, and a manual count of the number of berries on each bunch was recorded. In addition, the diameters of a small number of berries on each bunch were measured. The manual count was compared with the final estimate from the proposed method, and the following metrics were calculated:
Fig. 4: Initial berry number estimation (predicted berry number without SF vs. real berry number, R-square: 0.63), with average absolute error 23.1%
Average absolute error: the absolute values of the differences between the actual and estimated number of berries, divided by the actual number of berries, averaged over all bunches.
Accuracy: 1 minus the average absolute error.
Average error: the signed differences between the actual and estimated number of berries, divided by the actual number of berries, averaged over all bunches.
R² value: based on a linear correlation between the actual and estimated number of berries.
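These metrics can be computed as in the following illustrative Matlab snippet (assumed, not the authors' evaluation script), where actual and estimated are column vectors of per-bunch berry counts and R² is taken as the squared Pearson correlation coefficient.

```matlab
% Illustrative metric computation (assumed, not the authors' script).
relErr    = (estimated - actual) ./ actual;  % signed relative error per bunch
avgAbsErr = mean(abs(relErr));               % average absolute error
accuracy  = 1 - avgAbsErr;                   % accuracy
avgErr    = mean(relErr);                    % average (signed) error
C         = corrcoef(actual, estimated);     % linear correlation matrix
R2        = C(1,2)^2;                        % R-squared value
```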
IV. RESULTS
Fig. 4 shows the relationship between the actual and initially estimated berry numbers, and Fig. 5 shows the relationship between the actual and finally estimated berry numbers. The sparsity factor ranged from 0.32 to 0.89. On the dataset of 112 images described above, an average absolute error of 12.4% (an accuracy of 87.6%) was achieved, and the average error was 4.6%. The proposed method generated an R² value of 0.63 using the initial estimate of berry number; following application of the sparsity factor, a final R² value of 0.85 was achieved. The processing time was approximately 0.5 seconds per image, prior to any optimization.

The method fits berries to the outer profile of the bunch, which matches in-field observations of the structure of real grape bunches and produces aesthetically pleasing models. It is notable that larger bunches induced larger errors, most likely due to a larger number of interior berries.

A comparison between the proposed method and three other berry counting methods is given in Table I. In terms of processing type, the method of [4] requires calibrating the relationship between visible and invisible berries on a test data set, while the approaches presented in [5] and [7] need human interaction with software. These three methods are therefore not fully automatic, whereas the proposed approach can estimate the berry number directly from a single image of a bunch.
Fig. 5: Final berry number estimation after multiplying by the sparsity factor (predicted berry number with SF vs. real berry number, R-square: 0.85), with average absolute error 12.4%

TABLE I: Comparison of performance with other works

Method            Cultivar                        Data Set Size   Type        R²
Diago [4]         Cabernet Sauvignon              10 bunches      Semi-auto   0.62
Ivorra [5]        10 cultivars                    100 bunches     Semi-auto   0.71
Herrero [7]       Tempranillo                     20 bunches      Semi-auto   0.78
Proposed method   Cabernet Sauvignon and Shiraz   112 bunches     Automatic   0.85
V. CONCLUSIONS

This paper has presented a lightweight method for estimating the 3D structure of grape bunches from a single image. Experiments on two varieties of red grapes showed an average absolute accuracy of 87.6% relative to the actual number of berries on a bunch. The method achieved an R² value of 0.85 using a linear relationship between the estimated and actual number of berries. These results were obtained with nothing more than a standard compact camera.

The proposed 3D model based on one image also works on a bunch with a distinct shoulder, as shown in the second row of Fig. 6, but it cannot achieve a good estimate of berry numbers on a bunch with overlapping shoulders. This work is also limited to purple bunches, since the sparsity factor is obtained by a color operation. Future work will focus on extending this work to green grapes and more bunch shapes, and on fitting the visible berries into their exact positions in the 3D reconstruction model. In addition, comparison of the results with analysis of the same bunches photographed in vivo is expected to demonstrate the viability of the method for reliably counting the number of berries and, in turn, estimating block yield.

Fig. 6: 3D model of a bunch with a distinct shoulder

The processing time may also be improved by using a larger distance between horizontal sections as per step 6) in Section II-A. Some varieties of grapes elongate noticeably following véraison, and the method could be extended to fitting ellipses and reconstructing with corresponding ellipsoids. Furthermore, the 3D structure may be used for large-scale analysis of bunch structure, as it allows rapid estimation of many bunch parameters which are tedious to calculate via existing manual methods.

ACKNOWLEDGMENT

The authors would like to thank Will Drayton, Franci Dewyer, Joseph Geller and Angus Davidson from Treasury Wine Estates for collecting the raw images used in this paper. One author is partly supported by the Chinese Scholarship Council.
REFERENCES

[1] S. Nuske, K. Gupta, S. Narasimhan, and S. Singh, "Modeling and Calibrating Visual Yield Estimates in Vineyards," Field and Service Robotics, pp. 343–356, 2012.
[2] S. Martin, R. Dunstone, and G. Dunn, "How to forecast wine grape deliveries using Grape Forecaster Excel workbook version 7," Tech. Rep., 2003.
[3] S. Liu, S. Marden, and M. Whitty, "Towards Automated Yield Estimation in Viticulture," in Proceedings of the Australasian Conference on Robotics and Automation, Sydney, Australia, 2013, pp. 2–4.
[4] M. P. Diago, J. Tardaguila, N. Aleixos, B. Millan, J. M. Prats-Montalban, S. Cubero, and J. Blasco, "Assessment of Cluster Yield Components by Image Analysis," Journal of the Science of Food and Agriculture, July 2014.
[5] E. Ivorra, A. Sánchez, J. Camarasa, M. Diago, and J. Tardaguila, "Assessment of grape cluster yield components based on 3D descriptors using stereo vision," Food Control, vol. 50, pp. 273–282, 2015.
[6] M. Grossetete, Y. Berthoumieu, J.-P. Da Costa, C. Germain, O. Lavialle, G. Grenier, et al., "Early estimation of vineyard yield: site specific counting of berries by using a smartphone," in Information Technology, Automation and Precision Farming. International Conference of Agricultural Engineering CIGR-AgEng 2012: Agriculture and Engineering for a Healthier Life, Valencia, Spain, 8–12 July 2012. CIGR-EurAgEng, 2012, pp. C–1915.
[7] M. Herrero-Huerta, D. González-Aguilera, P. Rodriguez-Gonzalvez, and D. Hernández-López, "Vineyard yield estimation by automatic 3D bunch modelling in field conditions," Computers and Electronics in Agriculture, vol. 110, pp. 17–26, 2015.
[8] R. Chamelat, E. Rosso, A. Choksuriwong, C. Rosenberger, H. Laurent, and P. Bro, "Grape Detection by Image Processing," in IECON 2006 - 32nd Annual Conference on IEEE Industrial Electronics. IEEE, Nov. 2006, pp. 3697–3702.
[9] M. J. C. S. Reis, R. Morais, E. Peres, C. Pereira, O. Contente, S. Soares, A. Valente, J. Baptista, P. J. S. G. Ferreira, and J. Bulas Cruz, "Automatic detection of bunches of grapes in natural environment from color images," Journal of Applied Logic, vol. 10, no. 4, pp. 285–290, 2012.
[10] N. Otsu, "A threshold selection method from gray-level histograms," Automatica, vol. 11, no. 285-296, pp. 23–27, 1975.
