Transactions of the Association for Computational Linguistics, vol. 3, pp. 449–460, 2015. Action Editor: Diana McCarthy.
Submission batch: 5/2015; Revision batch: 7/2015; Published 8/2015.
© 2015 Association for Computational Linguistics. Distributed under a CC-BY 4.0 Licence.
Context-aware Frame-Semantic Role Labeling

Michael Roth and Mirella Lapata
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh EH8 9AB
{mroth,mlap}@inf.ed.ac.uk

Abstract

Frame semantic representations have been useful in several applications ranging from text-to-scene generation to question answering and social network analysis. Predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. In this paper, we present a semantic role labeling system that takes into account sentence and discourse context. We introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in FrameNet-based semantic role labeling.

1 Introduction

The goal of semantic role labeling (SRL) is to identify and label the arguments of semantic predicates in a sentence according to a set of predefined relations (e.g., "who" did "what" to "whom"). In addition to providing definitions and examples of role-labeled text, resources like FrameNet (Ruppenhofer et al., 2010) group semantic predicates into so-called frames, i.e., conceptual structures describing the background knowledge necessary to understand a situation, event or entity as a whole as well as the roles participating in it. Accordingly, semantic roles are defined on a per-frame basis and are shared among predicates.

In recent years, frame representations have been successfully applied in a range of downstream tasks, including question answering (Shen and Lapata, 2007), text-to-scene generation (Coyne et al., 2012), stock price prediction (Xie et al., 2013), and social network extraction (Agarwal et al., 2014). Whereas some tasks directly utilize information encoded in the FrameNet resource, others make use of FrameNet indirectly through the output of SRL systems that are trained on data annotated with frame-semantic representations. While advances in machine learning have recently given rise to increasingly powerful SRL systems following the FrameNet paradigm (Hermann et al., 2014; Täckström et al., 2015), little effort has been devoted to improving such models from a linguistic perspective. In this paper, we explore insights from the linguistic literature suggesting a connection between discourse and role labeling decisions and show how to incorporate these in an SRL system. Although early theoretical work (Fillmore, 1976) has recognized the importance of discourse context for the assignment of semantic roles, most computational approaches have shied away from such considerations.

To see how context can be useful, consider as an example the DELIVERY frame, which states that a THEME can be handed off to either a RECIPIENT or "more indirectly" to a GOAL. While the distinction between the latter two roles might be clear for some fillers (e.g., people vs. locations), there are others where both roles are equally plausible and additional information is required to resolve the ambiguity (e.g., countries). If we hear about a letter being delivered to Greece, for instance, reliable cues might be whether the sender is a person or a country and whether Greece refers to the geographic region or to the Greek government.
The example shows that context can generally influence the choice of correct role label. Accordingly, we assume that modeling contextual information, such as the meaning of a word in a given situation, can improve semantic role labeling performance. To validate this assumption, we explore different ways of incorporating contextual cues in an SRL model and provide experimental support that demonstrates the usefulness of such additional information.

The remainder of this paper is structured as follows. In Section 2, we present related work on semantic role labeling and the various features applied in traditional SRL systems. In Section 3, we provide additional background on the FrameNet resource. Sections 4 and 5 describe our baseline system and contextual extensions, respectively, and Section 6 presents our experimental results. We conclude the paper by discussing in more detail the output of our system and highlighting avenues for future work.

2 Related Work

Early work in SRL dates back to Gildea and Jurafsky (2002), who were the first to model role assignment to verb arguments based on FrameNet. Their model makes use of lexical and syntactic features, including binary indicators for the words involved, syntactic categories, dependency paths as well as position and voice in a given sentence. Most subsequent work in SRL builds on Gildea and Jurafsky's feature set, often with the addition of features that describe relevant syntactic structures in more detail, e.g., the argument's leftmost/rightmost dependent (Johansson and Nugues, 2008).

More sophisticated features include the use of convolution kernels (Moschitti, 2004; Croce et al., 2011) in order to represent predicate-argument structures and their lexical similarities more accurately. Beyond lexical and syntactic information, a few approaches employ additional semantic features based on annotated word senses (Che et al., 2010) and selectional preferences (Zapirain et al., 2013). Deschacht and Moens (2009) and Huang and Yates (2010) use sentence-internal sequence information, in the form of latent states in a hidden Markov model. More recently, a few approaches (Roth and Woodsend, 2014; Lei et al., 2015; Foland and Martin, 2015) explore ways of using low-rank vector and tensor approximations to represent lexical and syntactic features as well as combinations thereof.

To the best of our knowledge, there exists no prior work where features based on discourse context are used to assign roles on the sentence level. Discourse-like features have been previously applied in models that deal with so-called implicit arguments, i.e., roles which are not locally realized but resolvable within the greater discourse context (Ruppenhofer et al., 2010; Gerber and Chai, 2012). Successful features for resolving implicit arguments include the distance between mentions and any discourse relations occurring between them (Gerber and Chai, 2012), roles assigned to mentions in the previous context, the discourse prominence of the denoted entity (Silberer and Frank, 2012), and its centering status (Laparra and Rigau, 2013). None of these features have been used in a standard SRL system to date (and trivially, not all of them will be helpful as, for example, the number of sentences between a predicate and an argument is always zero within a sentence). In this paper, we extend the contextual features used for resolving implicit arguments to the SRL task and show how a set of discourse-level enhancements can be added to a traditional sentence-level SRL model.

3 FrameNet

The Berkeley FrameNet project (Ruppenhofer et al., 2010) develops a semantic lexicon and an annotated example corpus based on Fillmore's (1976) theory of frame semantics. Annotations consist of frame-evoking elements (i.e., words in a sentence that are associated with a conceptual frame) and frame elements (i.e., instantiations of semantic roles, which are defined per frame and filled by words or word sequences in a given sentence). For example, the DELIVERY frame describes a scene or situation in which a DELIVERER hands off a THEME to a RECIPIENT or a GOAL.1 In total, there are 1,019 frames and 8,886 frame elements defined in the latest publicly available version of FrameNet.2
On average, 11.6 different frame-evoking elements are provided for each frame (11,829 in total). Following previous work on FrameNet-based SRL, we use the full text annotation data set, which contains 23,087 frame instances.

1 See https://framenet2.icsi.berkeley.edu/ for a comprehensive list of frames and their definitions.
2 Version 1.5, released September 2010.

Semantic annotations for frame instances and fillers of frame elements are generally provided at the level of word sequences, which can be single words, complete or incomplete phrases, and entire clauses (Ruppenhofer et al., 2010, Chapter 4). An instance of the DELIVERY frame, with annotations of the frame-evoking element (underlined) and instantiated frame elements (in brackets), is given in the example below:

(1) The Soviet Union agreed to speed up [oil]THEME deliveriesDELIVERY [to Yugoslavia]RECIPIENT.

Note that the oil deliveries here concern Yugoslavia as a geopolitical entity and hence the RECIPIENT role is assigned. If Yugoslavia was referred to as the location of a delivery, the GOAL role would be assigned instead. In general, roles can be restricted by so-called semantic types (e.g., every filler of the THEME element in the DELIVERY frame needs to be a physical object). However, not all roles are typed and whether a specific phrase is a suitable filler largely depends on context.

4 Baseline Model

As a baseline for implementing contextual enhancements to an SRL model, we use the semantic role labeling components provided by the mate-tools (Björkelund et al., 2010). Given a frame-evoking element in a sentence and its associated frame (i.e., a predicate and its sense), the mate-tools form a pipeline of logistic regression classifiers that identify and label frame elements which are instantiated within the same sentence (i.e., a given predicate's arguments).

The adopted SRL system has been developed for PropBank/NomBank-style role labeling and we make several changes to adapt it to FrameNet. Specifically, we change the argument labeling procedure from predicate-specific to frame-specific roles and implement I/O methods to read and generate FrameNet XML files. For direct comparison with the previous state-of-the-art for FrameNet-based SRL, we further implement additional features used in the SEMAFOR system (Das et al., 2014) and combine the role labeling components of mate-tools with SEMAFOR's preprocessing toolchain.3 All features used in our system are listed in Table 1.

3 We note that better results have been reported in Hermann et al. (2014) and Täckström et al. (2015). However, both of these more recent approaches rely on a custom frame identification component as well as proprietary tools and models for tagging and parsing which are not publicly available.

Argument identification and classification
  Lemma form of f
  POS tag of f
  Any syntactic dependents of f *
  Subcat frame of f *
  Voice of a *
  Any lemma in a *
  Number of words in a
  First word and POS tag in a
  Second word and POS tag in a
  Last word and POS tag in a
  Relation from first word in a to its parent
  Relation from second word in a to its parent
  Relation from last word in a to its parent
  Relative position of a with respect to p
  Voice of a and relative position with respect to p *
Identification only
  Lemma form of the first word in a
  Lemma form of the syntactic head of a
  Lemma form of the last word in a
  POS tag of the first word in a
  POS tag of the syntactic head of a
  POS tag of the last word in a
  Relation from syntactic head of a to its parent
  Dependency path from a to f
  Length of dependency path from a to f
  Number of words between a and f
Table 1: Features from Das et al. (2014) which we adopt in our model; a denotes the argument span under consideration, f refers to the corresponding frame-evoking element. Identification features are instantiated as binary indicator features. Features marked with an asterisk are role specific. All other features apply to combinations of role and frame.

The main differences between our adaptation of mate-tools and SEMAFOR are as follows: whereas the latter implements identification and labeling of role fillers in one step, mate-tools follow the insight that these two steps are conceptually different (Xue and Palmer, 2004) and should be modeled separately. Accordingly, mate-tools contain a global reranking component which takes into account identification and labeling decisions while SEMAFOR only uses reranking techniques to filter overlapping argument predictions and other constraints (see Das et al., 2014 for details). We discuss the advantage of a global reranker for our setting in Section 5.

5 Extensions based on Context

Context can be relevant for semantic role labeling in various different ways. In this section, we motivate and describe four extensions over previous approaches.

The first extension is a set of features that model document-specific aspects of word meaning using distributional semantics. The motivation for this feature class stems from the insight that the meaning of a word in context can influence correct role assignment. While concepts such as polysemy, homonymy and metonymy are all relevant here, the scarce training data available for FrameNet-based SRL calls for a light-weight model that can be applied without large amounts of labeled data. We therefore employ distributional word representations which we critically adapt based on document content. We describe our contribution in Section 5.1.

Entities that fill semantic roles are sometimes mentioned in discourse. Given a specific mention for which a role is to be predicted, we can also directly use previous role assignments as classification cues. We describe our implementation of this feature in Section 5.2.
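To make the idea concrete, the snippet below is a minimal sketch of such a feature rather than the implementation described in Section 5.2: it assumes that coreference chains have already been computed and that role labels assigned earlier in the document are stored as (chain id, frame, role) triples; the function name and input format are illustrative only.

from collections import defaultdict

def prev_role_features(candidate_chain_id, assigned_roles):
    # Illustrative sketch, not the system's actual feature extractor:
    # emit binary indicators over roles previously assigned to mentions
    # that corefer with the candidate argument span.
    # assigned_roles: (chain_id, frame, role) triples labeled earlier
    # in the document.
    features = defaultdict(int)
    for chain_id, frame, role in assigned_roles:
        if chain_id == candidate_chain_id:
            features["coref_prev_role=%s:%s" % (frame, role)] = 1
    return dict(features)

For instance, if a mention coreferent with the candidate span was previously labeled as RECIPIENT of a DELIVERY instance, the indicator "coref_prev_role=DELIVERY:RECIPIENT" would fire for the current role labeling decision.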
The filler of a semantic role is often a word or phrase which occurs only once or a few times in a document. If neither syntax nor aspects of lexical meaning provide cues indicating a unique role, useful information can still be derived from the discourse salience of the denoted entity. Our model makes use of a simple salience indicator that can be reliably derived from automatically computed coreference chains. We describe the motivation and actual implementation of this feature in Section 5.3.

The aforementioned features will influence role labeling decisions directly; however, further improvements can be gained by considering interactions between labeling decisions. As discussed in Das et al. (2014), role annotations in FrameNet are unique with respect to a frame instance in more than 96% of cases. This means that even if a feature is not a positive indicator for a candidate role filler, knowing that it would be a better cue for another candidate can still prevent a model from assigning a frame element label incorrectly. While this kind of knowledge has been successfully implemented as constraints in recent FrameNet-based SRL models (Hermann et al., 2014; Täckström et al., 2015), earlier work on PropBank-based role labeling suggests that better performance can be achieved with a re-ranking component which has the potential to learn such constraints and other interactions implicitly (Toutanova et al., 2005; Björkelund et al., 2010). In our model, we adopt the latter method and extend it with additional frame-based features. We describe this approach in more detail in Section 5.4.

5.1 Modeling Word Meaning in Context

The underlying idea of distributional models of semantics is that meaning can be acquired based on distributional properties (typically represented by co-occurrence counts) of linguistic entities such as words and phrases (Sahlgren, 2008). Although the absolute meaning of distributional representations remains unclear, they have proven highly successful for modeling relative aspects of meaning, as required for instance in word similarity tasks (Mikolov et al., 2013; Pennington et al., 2014). Given their ability to model lexical similarity, it is not surprising that such representations are also successful at representing similar words in semantic tasks related to role labeling (Pennacchiotti et al., 2008; Croce et al., 2010; Zapirain et al., 2013).

Although distributional representations can be used directly as features for role labeling (Padó et al., 2008; Gorinski et al., 2013; Roth and Woodsend, 2014, inter alia), further gains should be possible when considering document-specific properties such as genre and context. This is particularly true in the context of FrameNet, where different senses are observed across a diverse range of texts including spoken dialogue and debate transcripts as well as travel guides and newspaper articles.
Country names, for example, can be observed as fillers for different roles depending on the text genre and its perspective. Whereas some text may talk about a country as an interesting holiday destination (e.g., "Berlitz Intro to Jamaica"), others may discuss what a country is good at or interested in (e.g., "Iran [Nuclear] Introduction"). A list of the most frequent roles assigned to different country names is displayed in Table 2.

Country   Frame               Frame Element
Iran      Supply              RECIPIENT
          Commerce buy        BUYER
China     Supply              SUPPLIER
          Commerce sell       SELLER
Iraq      Locative relation   GROUND
          Arriving            GOAL
Table 2: Most frequent roles assigned to country names appearing in FrameNet texts: whereas Iran and China are mostly mentioned in an economic context, references to Iraq are mainly found in a news article about a politician's visit to the country.

Previous approaches model word meaning in context (Thater et al., 2010; Dinu and Lapata, 2010, inter alia) using sentence-level information which is already available in traditional SRL systems in the form of explicit features. Here, we go one step further and define a simple model in which word meaning representations are adapted to each document. As a starting point, we use the GloVe toolkit (Pennington et al., 2014) for learning representations4 and apply it to the Wikipedia corpus made available by the Westbury Lab.5 The learned representations can be seen as word vectors whose components encode basic bits of related encyclopaedic knowledge. We adapt these general representations to the actual meaning of a word in a particular text by running additional iterations of the GloVe toolkit using document-specific co-occurrences as input and Wikipedia-based representations for initialization. To make up for the large difference in data size between the Wikipedia corpus and a single document, we normalize co-occurrence counts based on the ratio between the absolute numbers of co-occurrences in both resources.

4 We selected this toolkit in our work due to its flexibility: as it directly operates over co-occurrence matrices, we can manipulate counts prior to word vector computation and easily take into account multiple matrices.
5 http://www.psych.ualberta.ca/~westburylab/downloads/westburylab.wikicorp.download.html

Given co-occurrence matrices C_wiki and C_d, and the vocabulary V, we formally define the features of our SRL model as the components of the vector space \vec{w}_i of words w_i (1 ≤ i ≤ |V|) occurring in document d. The representations are learned by applying GloVe to optimize the following objective for n iterations (1 ≤ t ≤ n):

J_t = \sum_{i,j} f(X_{ij}) \left( \vec{w}_i^{\top} \vec{w}_j - \log X_{ij} \right)^2,   (2)

where X = C_wiki if t
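As a rough illustration of this adaptation step, the sketch below is not the GloVe toolkit itself but a simplified re-implementation of the update implied by Equation (2): it assumes that pre-trained Wikipedia vectors W_wiki and a document-level co-occurrence matrix C_doc are available as NumPy arrays (hypothetical inputs), rescales the document counts by the ratio of total co-occurrence mass as described above, and takes a few gradient steps starting from the Wikipedia vectors.

import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    # Standard GloVe weighting function f(X_ij).
    return np.minimum((x / x_max) ** alpha, 1.0)

def adapt_vectors(W_wiki, C_doc, total_wiki, lr=0.05, iters=10):
    # W_wiki:     |V| x d matrix of pre-trained Wikipedia vectors (initialization)
    # C_doc:      |V| x |V| document-specific co-occurrence matrix
    # total_wiki: total co-occurrence mass of the Wikipedia matrix
    # Rescale document counts by the ratio of absolute co-occurrence totals
    # so that they are comparable in magnitude to the Wikipedia counts.
    C = C_doc * (total_wiki / max(C_doc.sum(), 1.0))
    W = W_wiki.copy()
    rows, cols = np.nonzero(C)
    for _ in range(iters):
        for i, j in zip(rows, cols):
            x_ij = C[i, j]
            # Residual of the objective in Equation (2): w_i . w_j - log X_ij
            diff = W[i].dot(W[j]) - np.log(x_ij)
            g = glove_weight(x_ij) * diff
            W[i], W[j] = W[i] - lr * g * W[j], W[j] - lr * g * W[i]
    return W  # document-adapted vectors, whose components serve as SRL features

Equation (2) indicates that X switches between C_wiki and C_d depending on the iteration t; the sketch only covers the document-specific part of that schedule.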
At test time, the reranker takes as input the n-best labels for the m-best fillers of a frame structure, computes a global score for each of the n × m possible combinations and returns the structure with the highest overall score as its prediction output. Based on initial experiments on our training data, we set these parameters to m = 8 and n = 4.

6 Experiments

In this section, we demonstrate the usefulness of contextual features for FrameNet-based SRL models. Our hypothesis is that contextual information can considerably improve an existing semantic role labeling system. Accordingly, we test this hypothesis based on the output of three different systems. The first system, henceforth called Framat (short for FrameNet-adapted mate-tools), is the baseline system described in Section 4. The second system, henceforth Framat+context, is an enhanced version of the baseline that additionally uses all extensions described in Section 5. Finally, we also consider the output of SEMAFOR (Das et al., 2014), a state-of-the-art model for frame-semantic role labeling. Although all systems are provided with entire documents as input, SEMAFOR and Framat process each document sentence-by-sentence whereas Framat+context also uses features over all sentences.

For evaluation, we use the same FrameNet training and evaluation texts as established in Das and Smith (2011). We compute precision, recall and F1-score using the modified SemEval-2007 scorer from the SEMAFOR website.6

6 http://www.ark.cs.cmu.edu/SEMAFOR/eval/

Results

Table 4 summarizes our results with Framat, Framat+context, and SEMAFOR using gold and predicted frames (see the upper and lower half of the table, respectively). Although differences in system architecture lead to different precision/recall trade-offs for Framat and SEMAFOR, both systems achieve comparable F1 (for both gold and predicted frames). Compared to Framat, we can see that the contextual enhancements implemented in our Framat+context model lead to immediate gains of 1.3 points in recall, corresponding to a significant increase of 0.7 points in F1. Framat+context's recall is slightly below that of SEMAFOR (73.0% vs. 73.1%); however, it achieves a much higher level of precision (80.4% vs. 78.4%).

Frames    SRL model        P     R     F1
gold      SEMAFOR7         78.4  73.1  75.7*
gold      Framat           80.3  71.7  75.8*
gold      Framat+context   80.4  73.0  76.5
SEMAFOR   SEMAFOR          69.2  65.1  67.1*
SEMAFOR   Framat           71.1  63.7  67.2*
SEMAFOR   Framat+context   71.1  64.8  67.8
Table 4: Full structure prediction results using gold (top) and predicted frames (bottom). All numbers are percentages. *Significantly different (p < 0.05) from Framat+context.

7 Results produced by running SEMAFOR on the exact same frame instances for training and testing as our own models.

We examined whether differences in performance among the three systems are significant using an approximate randomization test over sentences (Yeh, 2000). SEMAFOR and Framat perform significantly worse (p < 0.05) compared to Framat+context both when gold and predicted frames are used. In the remainder of this section we discuss results based on gold frames, since the focus of this work lies primarily on the role labeling task.

Impact of Individual Features

We demonstrate the effect of adding individual context-based features to the Framat model in a separate experiment. Whereas all models in the previous experiment used a reranker for direct comparability, here we start with the Framat baseline (without a reranker) and add each enhancement described in Section 5 incrementally.

Model / added feature     P     R     F1
Framat w/o reranker       77.5  72.5  74.9
+ discourse newness       77.6  72.3  74.9
+ word meaning vectors    77.9  72.7  75.2
+ cooccurring roles       77.9  72.8  75.3
+ reranker                80.6  72.7  76.4
+ frame structure         80.4  73.0  76.5
Table 5: Full structure prediction results using gold frames, Framat and different sets of context features. All numbers are percentages.

As summarized in Table 5, the baseline without a reranker achieves a precision and recall of 77.5% and 72.5%, respectively.
Addition of our discourse newness feature increases precision (+0.1%), but also reduces recall (−0.2%). Adding word meaning vectors compensates for the loss in recall (+0.4%) and further increases precision (+0.3%). Information about role assignments to coreferring mentions increases recall (+0.1%) while retaining the same level of precision. Finally, we can see that jointly considering role labeling decisions in a global reranker with additional features on frame structure leads to the strongest boost in performance, with combined additional gains in precision and recall of +2.5% and +0.2%, respectively. Interestingly, the gains realized here are much higher compared to when adding the reranker to the Framat model without contextual features, which corresponds to a +2.8% increase in precision but a −0.8% reduction in recall.

General vs. Document-specific Vectors

We also assessed the impact of adapting vectors to documents (see Table 6). Specifically, we compared a version of the Framat+context model without any vectors against a model using the adaptation technique presented in Section 5.1 and a simpler alternative which obtains GloVe representations trained on the Wikipedia corpus and FrameNet texts. The latter model does not explicitly take document information into account, but it should be able to yield vectors representative of the FrameNet domains, merely by being trained on them. As shown in Table 6, our adaptation technique is superior to learning word representations based on Wikipedia and all FrameNet texts at once. Using the components of document-specific vectors as features improves precision and recall by +0.7 percentage points over Framat+context without vectors. Word representations trained on Wikipedia and FrameNet improve precision by +0.2 percentage points and recall by +0.6.

Model / word representations      P     R     F1
Framat+context without vectors    79.7  72.2  75.8
+ document-specific vectors       80.4  73.0  76.5
+ general (Wiki+FN) vectors       79.9  72.8  76.2
Table 6: Full structure prediction results using gold frames, Framat+context and different vector representations. All numbers are percentages.

Qualitative Improvements

In addition to quantitative gains, we also observe qualitative improvements when considering contextual features. A set of example predictions by different models is listed in Table 7. The annotations show that Framat and SEMAFOR mislabel several cases that are correctly classified by Framat+context. In the first example, only Framat+context is able to predict that on Dec. 1 fills the frame element TIME. This may seem trivial at first glance but is actually remarkable as the word token Dec neither occurs in the training data nor is well represented as a time expression in Wikipedia. The only way the model is able to label this phrase correctly is by finding that corresponding word tokens are similarly distributed across the test document as other time expressions are in the training data. In the second and third examples, correct assignments require some form of world knowledge which is not expressed within the respective sentences but might be approximated based on context. For example, knowing that aunt, uncle and grandmother are role fillers of a KINSHIP frame means that they are of the semantic type human and thus only compatible with the frame element RECIPIENT, not with GOAL. Similarly, correctly classifying the relation between Clinton and stooge in the last example is only possible if the model has access to some information that makes Clinton a likely filler of the SUPERIOR role. We conjecture that document-specific word vector representations provide such information given that Clinton co-occurs in the document with words such as president, chief, and claim.

SEMAFOR         *Can [he]THEME goMOTION [to Paris]GOAL on Dec. 1?
Framat          *Can [he]THEME goMOTION [to Paris on Dec. 1]GOAL?
Framat+context   Can [he]THEME goMOTION [to Paris]GOAL [on Dec. 1]TIME?
SEMAFOR         *SendSENDING [my regards]THEME to my aunt, uncle and grandmother.
Framat          *SendSENDING [my regards]THEME [to my aunt, uncle and grandmother]GOAL.
Framat+context   SendSENDING [my regards]THEME [to my aunt, uncle and grandmother]RECIPIENT.
SEMAFOR         *Stephanopoulos doesn't want to seem a Clinton stoogeSUBORDINATES_AND_SUPERIORS
Framat          *Stephanopoulos doesn't want to seem a [Clinton]DESCRIPTOR stoogeSUBORDINATES_AND_SUPERIORS
Framat+context   Stephanopoulos doesn't want to seem a [Clinton]SUPERIOR stoogeSUBORDINATES_AND_SUPERIORS
Table 7: Examples of frame structures that are labeled incorrectly (marked by asterisks) without contextual features.

Overall, we find that the features introduced in Section 5 model a fair amount of contextual information which can help a semantic role labeling model make better decisions.

7 Discussion

In this section, we discuss the extent to which our model leverages the full potential of contextual features for semantic role labeling. We manually examine role assignments to frame elements which seem particularly sensitive to context. We analyze such frame elements based on differences in label assignment between Framat and Framat+context that can be traced back to factors such as agency in discourse and word sense in context.
We investigate whether our model captures these factors successfully and showcase examples while reporting absolute changes in precision and recall.

7.1 Agency and Discourse

Many frame elements in FrameNet indicate agency, a property that we expect to highly correlate with contextual features on semantic types of assigned roles (see Section 5.2) and discourse salience (see Section 5.3). Analysis of system output revealed that such features indeed affect and generally improve role labeling. Considering all AGENT elements across frames, we observe absolute improvements of 4% in precision and 3% in recall. In the following, we provide a more detailed analysis of two specific frame elements: the low-frequency AGENT element of the PROJECT frame and the highly frequent SPEAKER element in the STATEMENT frame.

The AGENT of a PROJECT is defined as the "individual or organization that carries out the PROJECT". The main difficulty in identifying instances of this frame element is that the frame-evoking target word is typically a noun such as project, plan, or program and hence syntactic features on word-word dependencies do not provide sufficient cues. We found several cases where context provided missing cues, leading to an increase in recall from 56% to 78%. In cases where additional features did not help, we identified two types of errors: firstly, the filler was too far from the target word and therefore could not be identified as a filler at all ("[North Korea]AGENT is developing ... programPROJECT"), and secondly, earlier mentions indicating agency were not detected by the coreference resolution system ("The IAEA assisted Syria (...) This study was part of an [IAEA]AGENT ... programPROJECT").

The SPEAKER of a STATEMENT is defined as "the sentient entity that produces [a] MESSAGE". Instances of the STATEMENT frame are frequently evoked by verbs such as say, mention, and claim. The SPEAKER role can be hard to identify in subject position as an unknown entity could also fill the MEDIUM role. For example, "a report claims that ..." should be analyzed differently from "a person claims". Our contextual features improve role labeling in cases where the subject can be classified based on previous role assignments. On the negative side, we found our model to be too conservative in some cases where a subject is discourse new. Additional gains would be possible with improved coreference chains that include pronouns such as some and I. Such chains could be established through a better preprocessing pipeline or by utilizing additional linguistic resources.

7.2 Word Meaning and Context

As discussed earlier, we expect that the meaning of a word in context provides valuable cues regarding potential frame elements. Two types of words are of particular interest here: ambiguous words, for which different senses might apply depending on context, and out-of-vocabulary words, for which no clear sense could be established during training. In the following, we take a closer look at differences in role assignment between Framat and Framat+context for such fillers.

Ambiguous words that occur as fillers of different frame elements in the test set include party, power, program, and view.
We find occurrences of these words in two broad types of contexts: political and non-political. Within political contexts, party and power fill frame elements such as POSSESSION and LEADER. Outwith political contexts, we find frame elements such as ELECTRICITY and SOCIAL_EVENT to be far more likely. The Framat model exhibits a general bias towards the political domain, often missing instances of frame elements that are more common in non-political contexts (e.g., "the six-[party]INTERLOCUTORS talksDISCUSSION"). Framat+context, in contrast, shows less of a bias and provides better classification based on context features for all frame elements. Overall, precision for the four ambiguous words is improved from 86% to 93%, with a few errors remaining due to rare dependency paths (e.g., [program]ACT ←NMOD– which ←SBAR– is ←PRD– violationCOMPLIANCE) and differences between frame elements that depend on factors such as number (COGNIZER vs. COGNIZER_1).

A frequently observed error by the baseline model is to assign peripheral frame elements such as TIME to role fillers that actually are not time expressions. This happens because words which have not been seen frequently during training but appear in adverbial positions are generally likely to fill the frame element TIME. We find that the use of document-specific word vector representations drastically reduces the number of such errors (e.g., "to giveGIVING [generously]MANNER vs. *TIME"), with absolute gains in precision and recall of 14% and 9%, respectively, presumably because non-time expressions are often distributed differently across a document than time expressions. Document-specific word vector representations also improve recall for out-of-vocabulary words, as seen with the example of Dec discussed in Section 6. However, such representations by themselves might be insufficient to determine which aspects of a word sense are applicable across a document, as occurrences in specific contexts may also be misleading (e.g., "... changes [throughout the community]" vs. "... [throughout the ages]TIME"). Some of these cases could be resolved using higher-level features that explicitly model interactions between (predicted) word meaning in context and other factors; however, we leave this to future work.

8 Conclusions

In this paper, we enriched a traditional semantic role labeling model with additional information from context. The corresponding features we defined can be grouped into three categories: (1) discourse-level features that directly utilize discourse knowledge in the form of coreference chains (newness, prior role assignments), (2) sentence-level features that model properties of a frame structure as a whole, and (3) lexical features that can be computed using methods from distributional semantics and an adaptation to model document-specific word meaning.

To implement our discourse-level enhancements, we modified a semantic role labeling system developed for PropBank/NomBank which we found to achieve competitive performance on FrameNet-based annotations. Our main contribution lies in extending this system to the discourse level. Our experiments revealed that discourse-aware features can significantly improve semantic role labeling performance, leading to gains of over +2.0 percentage points in precision and state-of-the-art results in terms of F1. Analysis of system output revealed two reasons for improvement. Firstly, contextual features provide necessary additional information to understand and assign roles on the sentence level, and secondly, some of our discourse-level features generalize better than traditional lexical and syntactic features. We further found that additional gains can be achieved using improved preprocessing tools and a more sophisticated model for feature interactions. In the future, we are planning to assess whether discourse-level features generalize cross-linguistically. We would also like to investigate whether semantic role labeling can benefit from recognizing textual entailment and high-level discourse relations. Our code is publicly available under http://github.com/microth/mateplus.

Acknowledgements

We are grateful to Diana McCarthy and three anonymous referees whose feedback helped to substantially improve the present paper. The research presented in this paper was funded by a DFG Research Fellowship (RO 4848/1-1).
References

Apoorv Agarwal, Sriramkumar Balasubramanian, Anup Kotalwar, Jiehan Zheng, and Owen Rambow. 2014. Frame semantic tree kernels for social network extraction from text. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 211–219, Gothenburg, Sweden, 26–30 April 2014.
Anders Björkelund, Bernd Bohnet, Love Hafdell, and Pierre Nugues. 2010. A high-performance syntactic and semantic dependency parser. In Coling 2010: Demonstration Volume, pages 33–36, Beijing, China.
Wanxiang Che, Ting Liu, and Yongqiang Li. 2010. Improving semantic role labeling with word sense. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 246–249, Los Angeles, California, 1–6 June 2010.
Bob Coyne, Alex Klapheke, Masoud Rouhizadeh, Richard Sproat, and Daniel Bauer. 2012. Annotation tools and knowledge representation for a text-to-scene system. In Proceedings of the 24th International Conference on Computational Linguistics, pages 679–694, Mumbai, India, 8–15 December 2012.
Danilo Croce, Cristina Giannone, Paolo Annesi, and Roberto Basili. 2010. Towards open-domain semantic role labeling. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 237–246, Uppsala, Sweden, 11–16 July 2010.
Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2011. Structured lexical similarity via convolution kernels on dependency trees. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1034–1046, Edinburgh, United Kingdom.
Dipanjan Das and Noah A. Smith. 2011. Semi-supervised frame-semantic parsing for unknown predicates. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Oregon, 19–24 June 2011.
Dipanjan Das, Desai Chen, André F. T. Martins, Nathan Schneider, and Noah A. Smith. 2014. Frame-semantic parsing. Computational Linguistics, 40(1):9–56.
Koen Deschacht and Marie-Francine Moens. 2009. Semi-supervised semantic role labeling using the Latent Words Language Model. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 21–29, Singapore, 2–7 August 2009.
Georgiana Dinu and Mirella Lapata. 2010. Measuring distributional similarity in context. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1162–1172, Cambridge, Massachusetts, 9–11 October 2010.
Charles J. Fillmore. 1976. Frame semantics and the nature of language. In Annals of the New York Academy of Sciences: Conference on the Origin and Development of Language and Speech, volume 280, pages 20–32.
William Foland and James Martin. 2015. Dependency-based semantic role labeling using convolutional neural networks. In Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics, pages 279–288, Denver, Colorado.
Matthew Gerber and Joyce Chai. 2012. Semantic role labeling of implicit arguments for nominal predicates. Computational Linguistics, 38(4):755–798.
Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288.
Philip Gorinski, Josef Ruppenhofer, and Caroline Sporleder. 2013. Towards weakly supervised resolution of null instantiations. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Long Papers, pages 119–130, Potsdam, Germany, 19–22 March 2013.
Karl Moritz Hermann, Dipanjan Das, Jason Weston, and Kuzman Ganchev. 2014. Semantic frame identification with distributed word representations. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1448–1458, Baltimore, Maryland, 23–25 June 2014.
Fei Huang and Alexander Yates. 2010. Open-domain semantic role labeling by modeling word spans. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 968–978, Uppsala, Sweden, 11–16 July 2010.
Richard Johansson and Pierre Nugues. 2008. The effect of syntactic representation on semantic role labeling. In Proceedings of the 22nd International Conference on Computational Linguistics, pages 393–400, Manchester, United Kingdom, 18–22 August 2008.
Egoitz Laparra and German Rigau. 2013. Sources of evidence for implicit argument resolution. In Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) – Long Papers, pages 155–166, Potsdam, Germany, 19–22 March 2013.
Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885–916.
Tao Lei, Yuan Zhang, Lluís Màrquez, Alessandro Moschitti, and Regina Barzilay. 2015. High-order low-rank tensors for semantic role labeling. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1150–1160, Denver, Colorado.
Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia, 9–15 June 2013.
Alessandro Moschitti. 2004. A study on convolution kernels for shallow statistic parsing. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics (ACL'04), Main Volume, pages 335–342, Barcelona, Spain.
Sebastian Padó, Marco Pennacchiotti, and Caroline Sporleder. 2008. Semantic role assignment for event nominalisations by leveraging verbal data. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 665–672, Manchester, United Kingdom.
Marco Pennacchiotti, Diego De Cao, Roberto Basili, Danilo Croce, and Michael Roth. 2008. Automatic induction of FrameNet lexical units. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 457–465, Honolulu, Hawaii, USA, 25–27 October 2008.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532–1543, Doha, Qatar, 25–29 October 2014.
Ralph L. Rose. 2011. Joint information value of syntactic and semantic prominence for subsequent pronominal reference. Salience: Multidisciplinary Perspectives on Its Function in Discourse, 227:81–103.
Michael Roth and Kristian Woodsend. 2014. Composition of word representations improves semantic role labelling. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 407–413, Doha, Qatar, 25–29 October 2014.
Josef Ruppenhofer, Michael Ellsworth, Miriam R. L. Petruck, Christopher R. Johnson, and Jan Scheffczyk. 2010. FrameNet II: Extended Theory and Practice. Technical report, International Computer Science Institute, 14 September 2010.
Josef Ruppenhofer, Philip Gorinski, and Caroline Sporleder. 2011. In search of missing arguments: A linguistic approach. In Proceedings of the International Conference Recent Advances in Natural Language Processing 2011, pages 331–338, Hissar, Bulgaria, 12–14 September 2011.
Magnus Sahlgren. 2008. The distributional hypothesis. Italian Journal of Linguistics, 20(1):33–54.
Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 12–21, Prague, Czech Republic.
Carina Silberer and Anette Frank. 2012. Casting implicit role linking as an anaphora resolution task. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM 2012), pages 1–10, Montréal, Canada, 7–8 June.
Oscar Täckström, Kuzman Ganchev, and Dipanjan Das. 2015. Efficient inference and structured learning for semantic role labeling. Transactions of the Association for Computational Linguistics, 3:29–41.
Stefan Thater, Hagen Fürstenau, and Manfred Pinkal. 2010. Contextualizing semantic representations using syntactically enriched vector models. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 948–957, Uppsala, Sweden, 11–16 July 2010.
Kristina Toutanova, Aria Haghighi, and Christopher Manning. 2005. Joint learning improves semantic role labeling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 589–596, Ann Arbor, Michigan, 29–30 June 2005.
Boyi Xie, Rebecca J. Passonneau, Leon Wu, and Germán G. Creamer. 2013. Semantic frames to predict stock price movement. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 873–883, Sofia, Bulgaria, 4–9 August 2013.
Nianwen Xue and Martha Palmer. 2004. Calibrating features for semantic role labeling. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 88–94, Barcelona, Spain, July.
Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of the 18th International Conference on Computational Linguistics, pages 947–953, Saarbrücken, Germany.
Beñat Zapirain, Eneko Agirre, Lluís Màrquez, and Mihai Surdeanu. 2013. Selectional preferences for semantic role classification. Computational Linguistics, 39(3):631–663.