Short-Term Load Forecasting by Artificial Intelligent Technologies, p. 302
Energies 2018, 11, 1561
4.3. Experiment I: Cases with Larger Width Coefficients
In this experiment, we set the interval width coefficient α = 0.05, which is equivalent to setting the output to [0.95 × X, X, 1.05 × X] for a single sample in the training process of the neural network. Based on this structure, the PIs can be output given an input test set. In order to guarantee the diversity of the samples, we studied four different quarterly datasets for four different states.
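The widened training target described above can be sketched as follows. This is a minimal illustration of the [0.95 × X, X, 1.05 × X] construction, not the paper's implementation; the function and variable names are our own.

```python
import numpy as np

def interval_targets(x, alpha=0.05):
    """Build [lower, point, upper] training targets for each sample by
    widening the observed value x with the width coefficient alpha.
    Illustrative sketch; names are assumptions, not the paper's code."""
    x = np.asarray(x, dtype=float)
    return np.stack([(1 - alpha) * x, x, (1 + alpha) * x], axis=-1)

# Each row becomes [0.95 * x, x, 1.05 * x] for alpha = 0.05.
targets = interval_targets([100.0, 200.0], alpha=0.05)
```

With α = 0.05, a load value of 100 yields the target triple [95, 100, 105]; the network is then trained to emit the lower and upper bounds alongside the point value.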
The models involved in our research can be divided into three groups to better explain the impact of the different components. The first group included LUBE and E–LUBE; the difference between them was the structure of the neural network. The structure of LUBE consisted of three layers, similar to the traditional BP neural network, whereas in E–LUBE an extra context layer was added to the structure, so that we could validate the impact of the context layer on prediction by comparing the performance of these two models. The second group included PO–E–LUBE and IO–E–LUBE; the difference between them was the optimization algorithm in the training process. PO–E–LUBE used the error and variance of the point prediction to construct the cost function in MOSSA, whereby the target of minimizing the cost function effectively denotes a requirement for better prediction accuracy. In contrast, IO–E–LUBE employed the CP and PIW of the interval prediction to construct the cost function in the multi-objective optimization, where the target of minimizing such a cost function denoted the requirement for better performance in interval coverage, which is more rational for our goal of interval prediction. The comparison between such models can reflect the influence of different cost functions on the parameter optimization process. Furthermore, in the first group the parameters of the neural network are determined by a conventional gradient descent algorithm, while in the second group the parameters are determined by a heuristic optimization algorithm. Therefore, the impact of the different optimization algorithms can be shown by comparing the models in different groups. Finally, in the third group, data preprocessing is introduced: based on the models in the first two groups, CEEMDAN was used to refine the input dataset. The results of the models in this group will display the effect of data preprocessing in the hybrid model.
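The interval-coverage quantities that distinguish the two cost functions can be sketched as below. The CP and PINAW definitions follow their usual forms in the interval-prediction literature; the `io_cost` combination (an additive width term plus a penalty for coverage below a nominal level, with weight `eta`) is our own assumption for illustration and is not the paper's exact MOSSA cost function.

```python
import numpy as np

def coverage_probability(y, lower, upper):
    """CP: fraction of true values that fall inside their interval."""
    return float(np.mean((y >= lower) & (y <= upper)))

def pinaw(y, lower, upper):
    """Prediction-interval normalized average width: mean interval width
    divided by the target range (a common normalization; the paper's
    exact definition may differ)."""
    return float(np.mean(upper - lower) / (np.max(y) - np.min(y)))

def io_cost(y, lower, upper, mu=0.95, eta=50.0):
    """Illustrative interval-oriented cost: reward narrow intervals while
    penalizing coverage below the nominal level mu. The penalty weight
    eta and the additive form are assumptions for this sketch."""
    cp = coverage_probability(y, lower, upper)
    return pinaw(y, lower, upper) + eta * max(0.0, mu - cp)

y = np.array([100.0, 105.0, 98.0, 110.0])
lo, hi = y * 0.97, y * 1.06
print(coverage_probability(y, lo, hi))  # 1.0: every point lies inside
```

A point-oriented cost, by contrast, would be built from the error and variance of the point prediction alone, which is why minimizing it targets accuracy rather than interval coverage.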
The simulation results are shown in Tables 2 and 3. Also shown in Figure 5 are the principal indices of interval prediction, namely, CP and PINAW. Based on the comparisons described earlier, several conclusions can be inferred:
(1) By comparing the models in the first group, we can conclude that E–LUBE is superior to LUBE in most cases, such as the fourth quarter in NSW and the first quarter in TAS, as shown in Table 2 and Figure 5. The CP of E–LUBE reached 87.17%, while the CP of LUBE was 72.36% for the fourth quarter in NSW; the rate of improvement was more than 15% while the PINAW and PINRW were maintained. However, in some cases the improvement is not remarkable, such as the fourth quarter in QLD, as shown in Table 3 and Figure 5, where the performances of the two models are almost the same. In general, the performance of E–LUBE is better than that of LUBE, which means that the extra context layer in E–LUBE can improve performance. In theory, the context layer is able to provide more information from the previous outputs of the hidden layer. This superiority has been demonstrated in our experiments. However, owing to the instability of the parameters in the neural network, the improvement is not sufficiently remarkable in a few cases.
(2) In terms of the optimization methods, and according to the results shown in Figure 5 and Tables 2 and 3, the CPs of the second group (PO–E–LUBE and IO–E–LUBE) are better than those of E–LUBE in most cases. E–LUBE uses the gradient descent algorithm, which is sensitive to the initialization, to obtain the parameters of the NN. In contrast, the models in the second group use a heuristic swarm optimization algorithm, which can synthesize many initializations through an adequate population size. Thus, the models in the second group should, in theory, achieve better performance unless the random initializations of E–LUBE happen to be perfect. Moreover, within the second group, IO–E–LUBE has a larger CP value than PO–E–LUBE, with low levels of PINAW and PINRW. It is precisely the influence of the cost function that makes such a difference. The main