Energies 2018, 11, 1605
of training (i.e., cross-validation). Moreover, each model offered prediction results p_i (i ∈ 1, 2, . . . , n), which were then cast into second-level data; this outcome became the input for the second level as training data.
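The construction of this second-level training data from out-of-fold base-model predictions can be sketched as follows; the base learners, data, and split settings are illustrative placeholders, not the chapter's actual configuration:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def build_second_level_data(models, X, y, n_splits=5):
    """Collect out-of-fold predictions p_i from each base model.

    Each column of D_prime holds one model's cross-validated
    predictions; the matrix becomes the meta-learner's training input.
    """
    D_prime = np.zeros((len(X), len(models)))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        for i, model in enumerate(models):
            model.fit(X[train_idx], y[train_idx])
            D_prime[test_idx, i] = model.predict(X[test_idx])
    return D_prime

# Toy usage with placeholder base learners (SVR and a small MLP)
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=60)
D_prime = build_second_level_data(
    [SVR(), MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000)], X, y)
print(D_prime.shape)  # (60, 2): one column per base model
```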
2.2.1. Ensemble Pruning
The ranking-based subset selection method ranks the candidate models according to criteria such as the mean absolute percentage error (MAPE), directional accuracy (DA), and Euclidean distance (ED), and includes only the top n models from all candidate models.
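A minimal sketch of this ranking-based pruning step, using MAPE as the ranking criterion (the candidate names and prediction values are illustrative, not from the chapter's experiments):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def prune_by_rank(candidates, y_true, top_n):
    """Keep only the top_n candidate models with the lowest MAPE.

    `candidates` maps a model name to its validation predictions.
    """
    ranked = sorted(candidates,
                    key=lambda name: mape(y_true, candidates[name]))
    return ranked[:top_n]

# Illustrative candidate predictions against a known target
y_true = [100.0, 110.0, 120.0]
candidates = {
    "SVR":   [101.0, 109.0, 121.0],   # small errors
    "BPNN":  [ 90.0, 130.0, 100.0],   # large errors
    "naive": [100.0, 100.0, 100.0],   # medium errors
}
print(prune_by_rank(candidates, y_true, top_n=2))  # ['SVR', 'naive']
```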
2.2.2. Ensemble Integration
This step describes how the selected models were combined into an ensemble forecast. In this context, the stacking method is used to build the second-level data. Stacking uses an idea similar to k-fold cross-validation to solve two significant issues: first, to create out-of-sample predictions; second, to capture the distinct regions where each model performs best. The stacking process proceeds by inferring the biases of the generalizers with respect to the provided base learning set. Then, stacked regression using cross-validation was used to construct the 'good' combination.
Consider linear stacking for the prediction task. The basic idea of stacking is to 'stack' the predictions f_1, . . . , f_m by a linear combination with weights a_i (i = 1, . . . , m):

f_stacking(x) = ∑_{i=1}^{m} a_i f_i(x),  (1)

where the weight vector a is learned by a meta-learner.
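Equation (1) amounts to fitting a linear meta-learner on the base models' predictions. A sketch under the assumption that ordinary least squares (without intercept) serves as the meta-learner; the two simulated base predictions are noisy views of the target and are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
y = rng.normal(size=200)                       # target series
# Simulated base-model predictions f_1(x), f_2(x): noisy views of y,
# with the second model less noisy than the first
F = np.column_stack([y + rng.normal(scale=0.5, size=200),
                     y + rng.normal(scale=0.2, size=200)])

# Meta-learner: f_stacking(x) = sum_i a_i * f_i(x); the weight vector a
# is learned by least squares (no intercept, matching Equation (1))
meta = LinearRegression(fit_intercept=False).fit(F, y)
a = meta.coef_                                 # learned weights a_1, a_2
f_stacking = F @ a                             # combined prediction

print(np.round(a, 2))
```

As expected, the meta-learner assigns a larger weight to the less noisy base model, illustrating how stacking captures which predictor to trust.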
2.2.3. Ensemble Prediction
The second-level learner model(s) can be trained on the D′ data to produce the outcomes which will be used for the final predictions. In addition to selecting multiple sub-learners, stacking allows the specification of alternative models to learn how to best combine the predictions from the sub-models. Because a meta-model is used to best combine the predictions of the sub-models, this method is sometimes termed blending, as in mixing the final predictions.
In brief, Figure 1 demonstrates the general structure of the SMLE framework, which consists of various learning steps. After applying this scheme, three SMLE models were generated; the difference between the SMLE models was not in structure, but in the type of base model in level #0. The differences between the three models in the base-model part can be explained as follows:
• The 1st SMLE used an SVR learner in the base layer and LR as the meta-learner in the meta layer.
• The 2nd SMLE used a BPNN learner in the base layer and LR as the meta-learner in the meta layer.
• The 3rd SMLE used SVR and BPNN learners in the base layer and LR as the meta-learner in the meta layer.
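Assuming scikit-learn-style estimators stand in for the chapter's SVR, BPNN, and LR learners, the three SMLE variants can be sketched as follows (hyperparameters are placeholders):

```python
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def make_smle(variant):
    """Build one of the three SMLE variants.

    Only the level-#0 base learners differ; the LR meta-learner
    in the meta layer is shared by all three.
    """
    base = {
        1: [("svr", SVR())],                                   # 1st SMLE
        2: [("bpnn", MLPRegressor(hidden_layer_sizes=(10,),
                                  max_iter=2000))],            # 2nd SMLE
        3: [("svr", SVR()),
            ("bpnn", MLPRegressor(hidden_layer_sizes=(10,),
                                  max_iter=2000))],            # 3rd SMLE
    }[variant]
    # cv=5 builds the out-of-fold second-level data described above
    return StackingRegressor(estimators=base,
                             final_estimator=LinearRegression(),
                             cv=5)

model = make_smle(3)
print([name for name, _ in model.estimators])  # ['svr', 'bpnn']
```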
2.3. Experiment Study Design
2.3.1. Data
The GOC data were used as benchmark data; this dataset was downloaded from the website: https://www.bp.com/en/global/corporate/energy-economics/statistical-review-of-worldenergy.html. The data represented the total OC in the world; the data were yearly and covered the period from 1965 to 2016. The data consisted of two factors: the dependent variable, oil consumption (in million tonnes), which was a feature over time, and the date (in years), which was the independent variable in this case study. Therefore, the OC time series for this experiment had 52 data points. For a better explanation, we visualized the whole actual time series in Figure 2, with blue circles on the curve.
Short-Term Load Forecasting by Artificial Intelligent Technologies
- Title: Short-Term Load Forecasting by Artificial Intelligent Technologies
- Authors: Wei-Chiang Hong, Ming-Wei Li, Guo-Feng Fan
- Publisher: MDPI
- Place: Basel
- Date: 2019
- Language: English
- License: CC BY 4.0
- ISBN: 978-3-03897-583-0
- Dimensions: 17.0 x 24.4 cm
- Pages: 448
- Keywords: Scheduling Problems in Logistics, Transport, Timetabling, Sports, Healthcare, Engineering, Energy Management
- Category: Computer Science