Transactions of the Association for Computational Linguistics, vol. 5, pp. 295–307, 2017. Action Editor: Christopher Potts.

Submission batch: 10/2016; Revision batch: 12/2016; Published 8/2017.

© 2017 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.


Overcoming Language Variation in Sentiment Analysis with Social Attention

Yi Yang and Jacob Eisenstein
School of Interactive Computing
Georgia Institute of Technology
Atlanta, GA 30308
{yiyang+jacobe}@gatech.edu

Abstract

Variation in language is ubiquitous, particularly in newer forms of writing such as social media. Fortunately, variation is not random; it is often linked to social properties of the author. In this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. The key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. We formalize this idea in a novel attention-based neural network architecture, in which attention is divided among several basis models, depending on the author's position in the social network. This has the effect of smoothing the classification function across the social network, and makes it possible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. This model significantly improves the accuracies of sentiment analysis on Twitter and on review data.

1 Introduction

Words can mean different things to different people. Fortunately, these differences are rarely idiosyncratic, but are often linked to social factors, such as age (Rosenthal and McKeown, 2011), gender (Eckert and McConnell-Ginet, 2003), race (Green, 2002), geography (Trudgill, 1974), and more ineffable characteristics such as political and cultural attitudes (Fischer, 1958; Labov, 1963). In natural language processing (NLP), social media data has brought variation to the fore, spurring the development of new computational techniques for characterizing variation in the lexicon (Eisenstein et al., 2010), orthography (Eisenstein, 2015), and syntax (Blodgett et al., 2016). However, aside from the focused task of spelling normalization (Sproat et al., 2001; Aw et al., 2006), there have been few attempts to make NLP systems more robust to language variation across speakers or writers.

One exception is the work of Hovy (2015), who shows that the accuracies of sentiment analysis and topic classification can be improved by the inclusion of coarse-grained author demographics such as age and gender. However, such demographic information is not directly available in most datasets, and it is not yet clear whether predicted age and gender offer any improvements. On the other end of the spectrum are attempts to create personalized language technologies, as are often employed in information retrieval (Shen et al., 2005), recommender systems (Basilico and Hofmann, 2004), and language modeling (Federico, 1996). But personalization requires annotated data for each individual user: something that may be possible in interactive settings such as information retrieval, but is not typically feasible in natural language processing.

We propose a middle ground between group-level demographic characteristics and personalization, by exploiting social network structure. The sociological theory of homophily asserts that individuals are usually similar to their friends (McPherson et al., 2001). This property has been demonstrated for language (Bryden et al., 2013) as well as for the demographic properties targeted by Hovy (2015), which are more likely to be shared by friends than by random pairs of individuals (Thelwall, 2009). Social


Figure 1: Words such as 'sick' can express opposite sentiment polarities depending on the author. We account for this variation by generalizing across the social network.

network information is available in a wide range of contexts, from social media (Huberman et al., 2008) to political speech (Thomas et al., 2006) to historical texts (Winterer, 2012). Thus, social network homophily has the potential to provide a more general way to account for linguistic variation in NLP.

Figure 1 gives a schematic of the motivation for our approach. The word 'sick' typically has a negative sentiment, e.g., 'I would like to believe he's sick rather than just mean and evil.'¹ However, in some communities the word can have a positive sentiment, e.g., the lyric 'this sick beat', recently trademarked by the musician Taylor Swift.² Given labeled examples of 'sick' in use by individuals in a social network, we assume that the word will have a similar sentiment meaning for their near neighbors, an assumption of linguistic homophily that is the basis for this research. Note that this differs from the assumption of label homophily, which entails that neighbors in the network will hold similar opinions, and will therefore produce similar document-level labels (Tan et al., 2011; Hu et al., 2013). Linguistic homophily is a more generalizable claim, which could in principle be applied to any language processing task where author network information is available.

To scale this basic intuition to datasets with tens of thousands of unique authors, we compress the social network into vector representations of each author node, using an embedding method for large-scale networks (Tang et al., 2015b). Applying the algorithm to Figure 1, the authors within each triad would likely be closer to each other than to authors in the opposite triad. We then incorporate these embeddings into an attention-based neural network model, called SOCIAL ATTENTION, which employs multiple basis models to focus on different regions of the social network.

We apply SOCIAL ATTENTION to Twitter sentiment classification, gathering social network metadata for Twitter users in the SemEval Twitter sentiment analysis tasks (Nakov et al., 2013). We further adapt the system to Ciao product reviews (Tang et al., 2012), training author embeddings using trust relationships between reviewers. SOCIAL ATTENTION offers a 2-3% improvement over related neural and ensemble architectures in which the social information is ablated. It also outperforms all prior published results on the SemEval Twitter test sets.

Table 1: Statistics of the SemEval Twitter sentiment datasets.

Dataset      #Positive  #Negative  #Neutral  #Tweet
Train 2013       3,230      1,265     4,109   8,604
Dev 2013           477        273       614   1,364
Test 2013        1,572        601     1,640   3,813
Test 2014          982        202       669   1,853
Test 2015        1,038        365       987   2,390

2 Data

In the SemEval Twitter sentiment analysis tasks, the goal is to classify the sentiment of each message as positive, negative, or neutral. Following Rosenthal et al. (2015), we train and tune our systems on the SemEval Twitter 2013 training and development datasets respectively, and evaluate on the 2013–2015 SemEval Twitter test sets. Statistics of these datasets are presented in Table 1. Our training and development datasets lack some of the original Twitter messages, which may have been deleted since the datasets were constructed. However, our test datasets contain all the tweets used in the SemEval evaluations, making our results comparable with prior work.

¹ Charles Rangel, describing Dick Cheney.
² In the case of 'sick', speakers like Taylor Swift may employ either the positive or negative meaning, while speakers like Charles Rangel employ only the negative meaning. In other cases, communities may maintain completely distinct semantics for a word, such as the term 'pants' in American and British English. Thanks to Christopher Potts for suggesting this distinction and this example.

We construct three author social networks based on the follow, mention, and retweet relations between the 7,438 authors in the training dataset,


which we refer to as FOLLOWER, MENTION and RETWEET.³ Specifically, we use the Twitter API to crawl the friends of the SemEval users (individuals that they follow) and the most recent 3,200 tweets in their timelines.⁴ The mention and retweet links are then extracted from the tweet text and metadata. We treat all social networks as undirected graphs, where two users are socially connected if there exists at least one social relation between them.

3 Linguistic Homophily

The hypothesis of linguistic homophily is that socially connected individuals tend to use language similarly, as compared to a randomly selected pair of individuals who are not socially connected. We now describe a pilot study that provides support for this hypothesis, focusing on the domain of sentiment analysis. The purpose of this study is to test whether errors in sentiment analysis are assortative on the social networks defined in the previous section: that is, if two individuals (i, j) are connected in the network, then a classifier error on i suggests that errors on j are more likely.

We test this idea using a simple lexicon-based classification approach, which we apply to the SemEval training data, focusing only on messages that are labeled as positive or negative (ignoring the neutral class), and excluding authors who contributed more than one message (a tiny minority). Using the social media sentiment lexicons defined by Tang et al. (2014),⁵ we label a message as positive if it has at least as many positive words as negative words, and as negative otherwise.⁶ The assortativity is the fraction of dyads for which the classifier makes two correct predictions or two incorrect predictions (Newman, 2003). This measures whether classification errors are clustered on the network.

We compare the observed assortativity against the assortativity in a network that has been randomly rewired.⁷ Each rewiring epoch involves a number of random rewiring operations equal to the total number of edges in the network. (The edges are randomly selected, so a given edge may not be rewired in each epoch.) By counting the number of edges that occur in both the original and rewired networks, we observe that this process converges to a steady state after three or four epochs. As shown in Figure 2, the original observed network displays more assortativity than the randomly rewired networks in nearly every case. Thus, the Twitter social networks display more linguistic homophily than we would expect due to chance alone.

The differences in assortativity across network types are small, indicating that none of the networks is clearly best. The retweet network was the most difficult to rewire, with the greatest proportion of shared edges between the original and rewired networks. This may explain why the assortativities of the randomly rewired networks were closest to the observed network in this case.

4 Model

In this section, we describe a neural network method that leverages social network information to improve text classification. Our approach is inspired by ensemble learning, where the system prediction is the weighted combination of the outputs of several basis models. We encourage each basis model to focus on a local region of the social network, so that classification on socially connected individuals employs similar model combinations.

Given a set of instances {x_i} and authors {a_i}, the goal of personalized probabilistic classification is to estimate a conditional label distribution p(y | x, a). For most authors, no labeled data is available, so it is impossible to estimate this distribution directly. We therefore make a smoothness assumption over a social network G: individuals who are socially proximate in G should have similar classifiers. This idea is put into practice by modeling the conditional label distribution as a mixture over the

³ We could not gather the authorship information of 10% of the tweets in the training data, because the tweets or user accounts had been deleted by the time we crawled the social information.
⁴ The Twitter API returns a maximum of 3,200 tweets.
⁵ The lexicons include words that are assigned at least 0.99 confidence by the method of Tang et al. (2014): 1,474 positive and 1,956 negative words in total.
⁶ Ties go to the positive class because it is more common.
⁷ Specifically, we use the double edge swap operation of the networkx package (Hagberg et al., 2008). This operation preserves the degree of each node in the network.
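The rewiring baseline from this pilot study can be sketched in a few lines of Python. This is a simplified, illustrative stand-in for the networkx double edge swap operation cited in the footnote: each swap exchanges the endpoints of two randomly chosen edges, which preserves every node's degree, but unlike the networkx implementation it does not guard against creating parallel edges. The toy graph and per-author correctness labels are invented for illustration.

```python
import random

def assortativity(edges, correct):
    """Fraction of dyads where the classifier is correct on both
    endpoints or incorrect on both (the error assortativity used
    in the pilot study)."""
    agree = sum(1 for u, v in edges if correct[u] == correct[v])
    return agree / len(edges)

def rewire_epoch(edges, rng):
    """One rewiring epoch: as many double-edge swaps as there are
    edges. A swap (u,v),(x,y) -> (u,y),(x,v) preserves the degree
    of every node; swaps that would create self-loops are skipped.
    (Parallel edges are not prevented in this simplified sketch.)"""
    edges = list(edges)  # work on a copy
    n = len(edges)
    for _ in range(n):
        i, j = rng.randrange(n), rng.randrange(n)
        (u, v), (x, y) = edges[i], edges[j]
        if u == y or x == v:  # would create a self-loop
            continue
        edges[i], edges[j] = (u, y), (x, v)
    return edges

# Toy example in the spirit of Figure 1: two triads, with the
# classifier correct on one triad and wrong on the other.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
correct = {0: True, 1: True, 2: True, 3: False, 4: False, 5: False}
observed = assortativity(edges, correct)  # errors cluster: 1.0
rewired = rewire_epoch(edges, random.Random(0))
```

On the observed toy network every dyad agrees, so the assortativity is 1.0; averaging the assortativity over many independently rewired copies gives the chance baseline that Figure 2 compares against.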


Figure 2: Assortativity of observed and randomized networks. Each rewiring epoch performs a number of rewiring operations equal to the total number of edges in the network. The randomly rewired networks almost always display lower assortativities than the original network, indicating that the accuracy of the lexicon-based sentiment analyzer is more assortative on the observed social network than one would expect by chance.

predictions of K basis classifiers,

p(y \mid x, a) = \sum_{k=1}^{K} \Pr(z_a = k \mid a, G) \; p(y \mid x, z_a = k).   (1)

The basis classifiers p(y | x, z_a = k) can be arbitrary conditional distributions; we use convolutional neural networks, as described in §4.2. The component weighting distribution Pr(z_a = k | a, G) is conditioned on the social network G, and functions as an attentional mechanism, described in §4.1. The basic intuition is that for a pair of authors a_i and a_j who are nearby in the social network G, the prediction rules should behave similarly if the attentional distributions are similar, p(z | a_i, G) ≈ p(z | a_j, G). If we have labeled data only for a_i, some of the personalization from that data will be shared by a_j. The overall classification approach can be viewed as a mixture of experts (Jacobs et al., 1991), leveraging the social network as side information to choose the distribution over experts for each author.

4.1 Social Attention Model

The goal of the social attention model is to assign similar basis weights to authors who are nearby in the social network G. We operationalize social proximity by embedding each node's social network position into a vector representation. Specifically, we employ the LINE method (Tang et al., 2015b), which estimates D(v)-dimensional node embeddings v_a as parameters in a probabilistic model over edges in the social network. These embeddings are learned solely from the social network G, without leveraging any textual information. The attentional weights are then computed from the embeddings using a softmax layer,

\Pr(z_a = k \mid a, G) = \frac{\exp(\phi_k^\top v_a + b_k)}{\sum_{k'=1}^{K} \exp(\phi_{k'}^\top v_a + b_{k'})}.   (2)

This embedding method uses only single-relational networks; in the evaluation, we will show results for Twitter networks built from networks of follow, mention, and retweet relations. In future work, we may consider combining all of these relation types into a unified multi-relational network. It is possible that embeddings in such a network could be estimated using techniques borrowed from multi-relational knowledge networks (Bordes et al., 2014; Wang et al., 2014).

4.2 Sentiment Classification with Convolutional Neural Networks

We next describe the basis models, p(y | x, z = k). Because our target task is classification on microtext documents, we model this distribution using convolutional neural networks (CNNs; LeCun et al., 1989), which have been proven to perform well on sentence classification tasks (Kalchbrenner et al., 2014; Kim, 2014). CNNs apply layers of convolving filters to n-grams, thereby generating a vector of dense local features. CNNs improve upon traditional bag-of-words models because of their ability to capture word ordering information.

Let x = [h_1, h_2, ..., h_n] be the input sentence, where h_i is the D(w)-dimensional word vector corresponding to the i-th word in the sentence. We use


one convolutional layer and one max pooling layer to generate the sentence representation of x. The convolutional layer involves filters that are applied to bigrams to produce feature maps. Formally, given the bigram word vectors h_i, h_{i+1}, the features generated by m filters can be computed by

c_i = \tanh(W_L h_i + W_R h_{i+1} + b),   (3)

where c_i is an m-dimensional vector, W_L and W_R are m × D(w) projection matrices, and b is the bias vector. The m-dimensional vector representation of the sentence is given by the pooling operation

s = \max_{i \in 1, \ldots, n-1} c_i.   (4)

To obtain the conditional label probability, we utilize a multiclass logistic regression model,

\Pr(Y = t \mid x, z = k) = \frac{\exp(\beta_t^\top s_k + b_t)}{\sum_{t'=1}^{T} \exp(\beta_{t'}^\top s_k + b_{t'})},   (5)

where β_t is an m-dimensional weight vector, b_t is the corresponding bias term, and s_k is the m-dimensional sentence representation produced by the k-th basis model.

4.3 Training

We fix the pretrained author and word embeddings during training of our social attention model. Let Θ denote the parameters that need to be learned, which include {W_L, W_R, b, {β_t, b_t}_{t=1}^T} for every basis CNN model, and the attentional weights {φ_k, b_k}_{k=1}^K. We minimize the following logistic loss objective for each training instance:

\ell(\Theta) = -\sum_{t=1}^{T} \mathbf{1}[y^* = t] \log \Pr(Y = t \mid x, a),   (6)

where y* is the ground truth class for x, and 1[·] represents an indicator function. We train the models for between 10 and 15 epochs using the Adam optimizer (Kingma and Ba, 2014), with early stopping on the development set.

4.4 Initialization

One potential problem is that after initialization, a small number of basis models may claim most of the mixture weights for all the users, while other basis models are inactive. This can occur because some basis models may be initialized with parameters that are globally superior. As a result, the "dead" basis models will receive near-zero gradient updates, and therefore can never improve. The true model capacity can thereby be substantially lower than the K assigned experts.

Ideally, dead basis models will be avoided because each basis model should focus on a unique region of the social network. To ensure that this happens, we pretrain the basis models using an instance weighting approach from the domain adaptation literature (Jiang and Zhai, 2007). For each basis model k, each author a has an instance weight α_{a,k}. These instance weights are based on the author's social network node embedding, so that socially proximate authors will have high weights for the same basis models. This is ensured by endowing each basis model with a random vector γ_k ∼ N(0, σ²I), and setting the instance weights as

\alpha_{a,k} = \mathrm{sigmoid}(\gamma_k^\top v_a).   (7)

This simple design results in similar instance weights for socially proximate authors. During pretraining, we train the k-th basis model by optimizing the following loss function for every instance:

\ell_k = -\alpha_{a,k} \sum_{t=1}^{T} \mathbf{1}[y^* = t] \log \Pr(Y = t \mid x, z_a = k).   (8)

The pretrained basis models are then assembled together and jointly trained using Equation 6.

5 Experiments

Our main evaluation focuses on the 2013–2015 SemEval Twitter sentiment analysis tasks. The datasets have been described in §2. We train and tune our systems on the Train 2013 and Dev 2013 datasets respectively, and evaluate on the Test 2013–2015 sets. In addition, we evaluate on another dataset based on Ciao product reviews (Tang et al., 2012).

5.1 Social Network Expansion

We utilize Twitter's follower, mention, and retweet social networks to train user embeddings. By querying the Twitter API in April 2015, we were able


Table 2: Statistics of the author social networks used for training author embeddings.

Network     #Author  #Relation
FOLLOWER+    18,281  1,287,260
MENTION+     25,007  1,403,369
RETWEET+     35,376  2,194,319

to identify 15,221 authors for the tweets in the SemEval datasets described above. We induce social networks for these individuals by crawling their friend links and timelines, as described in §2. Unfortunately, these networks are relatively sparse, with a large number of isolated author nodes. To improve the quality of the author embeddings, we expand the set of author nodes by adding nodes that do the most to densify the author networks: for the follower network, we add additional individuals that are followed by at least a hundred authors in the original set; for the mention and retweet networks, we add all users that have been mentioned or retweeted by at least twenty authors in the original set. The statistics of the resulting networks are presented in Table 2.

5.2 Experimental Settings

We employ the pretrained word embeddings used by Astudillo et al. (2015), which are trained with a corpus of 52 million tweets, and have been shown to perform very well on this task. The embeddings are learned using the structured skip-gram model (Ling et al., 2015), and the embedding dimension is set at 600, following Astudillo et al. (2015). We report the same evaluation metric as the SemEval challenge: the average F1 score of the positive and negative classes.⁸

Competitive systems. We consider five competitive Twitter sentiment classification methods. Convolutional neural network (CNN) has been described in §4.2, and is the basis model of SOCIAL ATTENTION. Mixture of experts employs the same CNN model as an expert, but the mixture densities solely depend on the input values. We adopt the summation of the pretrained word embeddings as the sentence-level input to learn the gating function.⁹ The model architecture of random attention is nearly identical to SOCIAL ATTENTION: the only distinction is that we replace the pretrained author embeddings with random embedding vectors, drawn uniformly from the interval (−0.25, 0.25). Concatenation concatenates the author embedding with the sentence representation obtained from the CNN, and then feeds the new representation to a softmax classifier. Finally, we include SOCIAL ATTENTION, the attention-based neural network method described in §4.

We also compare against the three top-performing systems in the SemEval 2015 Twitter sentiment analysis challenge (Rosenthal et al., 2015): WEBIS (Hagen et al., 2015), UNITN (Severyn and Moschitti, 2015), and LSISLIF (Hamdan et al., 2015). UNITN achieves the best average F1 score on the Test 2013–2015 sets among all the submitted systems. Finally, we republish results of NLSE (Astudillo et al., 2015), a non-linear subspace embedding model.

Parameter tuning. We tune all the hyperparameters on the SemEval 2013 development set. We choose the number of bigram filters for the CNN models from {50, 100, 150}. The size of author embeddings is selected from {50, 100}. For mixture of experts, random attention, and SOCIAL ATTENTION, we compare a range of numbers of basis models, {3, 5, 10, 15}. We found that a relatively small number of basis models is usually sufficient to achieve good performance. The number of pretraining epochs is selected from {1, 2, 3}. During joint training, we check the performance on the development set after each epoch to perform early stopping.

5.3 Results

Table 3 summarizes the main empirical findings, where we report results obtained from author embeddings trained on the RETWEET+ network for SOCIAL ATTENTION. The results of different social networks for SOCIAL ATTENTION are shown in Table 4. The best hyperparameters are: 100 bigram

⁸ Regarding the neutral class: systems are penalized with false positives when neutral tweets are incorrectly classified as positive or negative, and with false negatives when positive or negative tweets are incorrectly classified as neutral. This follows the evaluation procedure of the SemEval challenge.
⁹ The summation of the pretrained word embeddings works better than the average of the word embeddings.


SystemTest2013Test2014Test2015AverageOurimplementationsCNN69.3172.7363.2468.43Mixtureofexperts68.9772.0764.28*68.44Randomattention69.4871.5664.37*68.47Concatenation69.8071.9663.8068.52SOCIALATTENTION71.91*75.07*66.75*71.24ReportedresultsNLSE72.0973.6465.2170.31WEBIS68.4970.8664.8468.06UNITN72.7973.6064.5970.33LSISLIF71.3471.5464.2769.05Table3:AverageF1scoreontheSemEvaltestsets.Thebestresultsareinbold.Resultsaremarkedwith*iftheyaresignificantlybetterthanCNNatp<0.05.SemEvalTestNetwork201320142015AverageFOLLOWER+71.4974.1766.0070.55MENTION+71.7274.1466.2770.71RETWEET+71.9175.0766.7571.24Table4:ComparisonofdifferentsocialnetworkswithSOCIALATTENTION.Thebestresultsareinbold.filters;100-dimensionalauthorembeddings;K=5basismodels;1pre-trainingepoch.Toestablishthestatisticalsignificanceoftheresults,weobtain100bootstrapsamplesforeachtestset,andcomputetheF1scoreoneachsampleforeachalgorithm.Atwo-tailpairedt-testisthenappliedtodetermineiftheF1scoresoftwoalgorithmsaresignificantlydifferent,p<0.05.Mixtureofexperts,randomattention,andCNNallachievesimilaraverageF1scoresontheSemEvalTwitter2013–2015testsets.Notethatrandomat-tentioncanbenefitfromsomeofthepersonalizedinformationencodedintherandomauthorembed-dings,asTwittermessagespostedbythesameau-thorsharethesameattentionalweights.However,itbarelyimprovestheresults,becausethemajorityofauthorscontributeasinglemessageintheSemEvaldatasets.Withtheincorporationofauthorsocialnet-workinformation,concatenationslightlyimprovestheclassificationperformance.Finally,SOCIALAT-TENTIONgivesmuchbetterresultsthanconcatena-tion,asitisabletomodeltheinteractionsbetweentextrepresentationsandauthorrepresentations.ItsignificantlyoutperformsCNNonalltheSemEvaltestsets,yielding2.8%improvementonaverageF1score.SOCIALATTENTIONalsoperformssubstan-tiallybetterthanthetop-performingSemEvalsys-temsandNLSE,especiallyonthe2014and2015testsets.Wenowturntoacomparisonofthesocialnet-works.AsshowninTable4,theRETWEET+net-workisthemosteffective,althoughthedifferencesaresmall:SOCI
ALATTENTIONoutperformspriorworkregardlessofwhichnetworkisselected.Twit-ter’s“following”relationisarelativelylow-costformofsocialengagement,anditislesspublicthanretweetingormentioninganotheruser.Thusitisunsurprisingthatthefollowernetworkisleastusefulforsocially-informedpersonalization.TheRETWEET+networkhasdensersocialconnectionsthanMENTION+,whichcouldleadtobetterauthorembeddings.5.4AnalysisWenowinvestigatewhetherlanguagevariationinsentimentmeaninghasbeencapturedbydifferentbasismodels.Wefocusonthesamesentimentwords(Tangetal.,2014)thatweusedtotestlin-guistichomophilyinouranalysis.Weareinter-estedtodiscoversentimentwordsthatareusedwiththeoppositesentimentmeaningsbysomeauthors.Tomeasurethelevelofmodel-specificityforeach l D o w n o a d e d f r o m h t t p : / / D ich R e C T . M ich T . e d u / t a c l / l A R T ich C e - P D F / d o i / . 1 0 1 1 6 2 / t l a c _ a _ 0 0 0 6 2 1 5 6 7 4 5 8 / / t l a c _ a _ 0 0 0 6 2 P D . F B j G u e S T T O N 0 9 S e P e M B e R 2 0 2 3 302 BasismodelMorepositiveMorenegative1banginglossfeverbrokenfuckingdearlikegodyeahwow2chillingcoldillsicksucksatisfytrustwealthstronglmao3assdamnpissbitchshittalenthonestlyvotingwinclever4insanebawlingfeverweirdcrylmaosuperlolhahahahaha5ruinsillybadboringdreadfullovaticswishbeliebersarianatorskendallTable5:Top5morepositive/negativewordsforthebasismodelsintheSemEvaltrainingdata.Boldedentriescorrespondtowordsthatareoftenusedironically,bytopauthorsrelatedtobasismodel1and4.Underlinedentriesareswearwords,whicharesometimesusedpositivelybytopuserscorrespondingtobasismodel3.Italicentriesrefertocelebritiesandtheirfans,whichusuallyappearinnegativetweetsbytopauthorsforbasismodel5.WordSentimentExamplesickpositiveWatchESPNtonighttoseemeburning@userforasickgoalonthetopten.#realbackyardFIFAbitchpositive@userbitchushouldacamewithmeSaturdaysooooomuchfun.MetRomeosantoslmaonaimethislookalikeshitpositive@userwellshit!Ihopeyourbackforthemorningshow.IneedyouonmydrivetoCupertinoonMonday!Havefun!dearnegativeDearSpurs,Youareo
utofCOC,notinChampionsLeagueandcomeMaywontbeintop4.Whydoyouevenexist?wownegativeWow.Tigerfiresa63butnotgoodenough.NickWatneyshootsa59ifhebirdiesthe18th?!?#sicklolnegativeLolsuperawkwardifitshellafoggyatRimtomorrowandthegamessupposetobeontvlolUhhhh..Where’stheball?LolTable6:TweetexamplesthatcontainsentimentwordsconveyingspecificsentimentmeaningsthatdifferfromtheircommonsensesintheSemEvaltrainingdata.ThesentimentlabelsareadoptedfromtheSemEvalannotations.wordw,wecomputethedifferencebetweenthemodel-specificprobabilitiesp(j|X=w,Z=k)andtheaverageprobabilitiesofallbasismodels1KPKk=1p(j|X=w,Z=k)forpositiveandneg-ativeclasses.Thefivewordsinthenegativeandpos-itivelexiconswiththehighestscoresforeachmodelarepresentedinTable5.AsshowninTable5,Twitteruserscorrespond-ingtobasismodels1and4oftenusesomewordsironicallyintheirtweets.Basismodel3tendstoassignpositivesentimentpolaritytoswearwords,andTwitterusersrelatedtobasismodel5seemtobelessfondoffansofcertaincelebrities.Finally,basismodel2identifiesTwitterusersthatwehavedescribedintheintroduction—theyoftenadoptgen-eralnegativewordslike‘ill’,‘sick’,and‘suck’posi-tively.ExamplescontainingsomeofthesewordsareshowninTable6.5.5SentimentAnalysisofProductReviewsThelabeleddatasetsforTwittersentimentanalysisarerelativelysmall;toevaluateourmethodonalargerdataset,weutilizeaproductreviewdatasetbyTangetal.(2012).Thedatasetconsistsof257,682reviewswrittenby10,569userscrawledfromapopularproductreviewsites,Ciao.10Theratinginformationindiscretefive-starrangeisavail-ableforthereviews,whichistreatedasthegroundtruthlabelinformationforthereviews.Moreover,theusersofthissitecanmarkexplicit“trust”rela-tionshipswitheachother,creatingasocialnetwork.Toselectexamplesfromthisdataset,wefirstre-movedreviewsthatweremarkedbyreadersas“notuseful.”Wetreatedreviewswithmorethanthreestarsaspositive,andlessthanthreestarsasnega-tive;reviewswithexactlythreestarswereremoved.10http://www.ciao.co.uk l D o w n o a d e d f r o m h t t p : / / D ich R e C T . M ich T . 
e d u / t a c l / l A R T ich C e - P D F / d o i / . 1 0 1 1 6 2 / t l a c _ a _ 0 0 0 6 2 1 5 6 7 4 5 8 / / t l a c _ a _ 0 0 0 6 2 P D . F B j G u e S T T O N 0 9 S e P e M B e R 2 0 2 3 303 Dataset#Author#Positive#Negative#ReviewTrainCiao8,54563,0476,95370,000DevCiao4,0879,05294810,000TestCiao5,74017,9782,02220,000Total9,26790,0779,923100,000Table7:StatisticsoftheCiaoproductreviewdatasets.SystemTestCiaoCNN78.43Mixtureofexperts78.37Randomattention79.43*Concatenation77.99SOCIALATTENTION80.19**Table8:AverageF1scoreontheCiaotestset.Thebestresultsareinbold.Resultsaremarkedwith*and**iftheyaresignificantlybetterthanCNNandrandomatten-tionrespectively,atp<0.05.Wethensampled100,000reviewsfromthisset,andsplitthemrandomlyintotraining(70%),develop-ment(10%)andtestsets(20%).ThestatisticsoftheresultingdatasetsarepresentedinTable7.Weutilize145,828trustrelationsbetween18,999Ciaouserstotraintheauthorembeddings.Weconsiderthe10,000mostfrequentwordsinthedatasets,andassignthempretrainedword2vecembeddings.11AsshowninTable7,thedatasetshavehighlyskewedclassdistributions.Thus,weusetheaverageF1scoreofpositiveandnegativeclassesastheevalu-ationmetic.TheevaluationresultsarepresentedinTable8.ThebesthyperparametersaregenerallythesameasthoseforTwittersentimentanalysis,exceptthattheoptimalnumberofbasismodelsis10,andtheop-timalnumberofpretrainingepochsis2.MixtureofexpertsandconcatenationobtainslightlyworseF1scoresthanthebaselineCNNsystem,butran-domattentionperformssignificantlybetter.Incon-trasttotheSemEvaldatasets,individualusersof-tencontributemultiplereviewsintheCiaodatasets(theaveragenumberofreviewsfromanauthoris10.8;Table7).Asanauthortendstoexpresssimilaropinionstowardrelatedproducts,randomattention11https://code.google.com/archive/p/word2vecisabletoleveragethepersonalizedinformationtoimprovesentimentanalysis.Priorworkhasinves-tigatedthedirection,obtainingpositiveresultsus-ingspeakeradaptationtechniques(AlBonietal.,2015).Finally,byexploitingthesocialnetworkoftrustrelations,SOCIALATTENTIONobt
ainsfurtherimprovements,outperformingrandomattentionbyasmallbutsignificantmargin.6RelatedWorkDomainadaptationandpersonalizationDo-mainadaptationisaclassicapproachtohandlingthevariationinherentinsocialmediadata(Eisen-stein,2013).Earlyapproachestosuperviseddo-mainadaptationfocusedonadaptingtheclassifierweightsacrossdomains,usingenhancedfeaturespaces(Daum´eIII,2007)orBayesianpriors(ChelbaandAcero,2006;FinkelandManning,2009).Re-centworkfocusesonunsuperviseddomainadap-tation,whichtypicallyworksbytransformingtheinputfeaturespacesoastoovercomedomaindif-ferences(Blitzeretal.,2006).However,inmanycases,thedatahasnonaturalpartitioningintodo-mains.Inpreliminarywork,weconstructedsocialnetworkdomainsbyrunningcommunitydetectionalgorithmsontheauthorsocialnetwork(Fortunato,2010).However,thesealgorithmsprovedtobeun-stableonthesparsenetworksobtainedfromsocialmediadatasets,andofferedminimalperformanceimprovements.Inthispaper,weconvertsocialnet-workpositionsintonodeembeddings,anduseanattentionalcomponenttosmooththeclassificationruleacrosstheembeddingspace.Personalizationhasbeenanactiveresearchtopicinareassuchasspeechrecognitionandinformationretrieval.Standardtechniquesforthesetasksincludelineartransformationofmodelparameters(Legget-terandWoodland,1995)andcollaborativefilter-ing(Breeseetal.,1998).Thesemethodshavere-centlybeenadaptedtopersonalizedsentimentanal-ysis(Tangetal.,2015a;AlBonietal.,2015).Su-pervisedpersonalizationtypicallyrequireslabeledtrainingexamplesforeveryindividualuser.Incon-trast,byleveragingthesocialnetworkstructure,wecanobtainpersonalizationevenwhenlabeleddataisunavailableformanyauthors. l D o w n o a d e d f r o m h t t p : / / d i r e c t . m i t . e d u / t a c l / l a r t i c e - p d f / d o i / . 1 0 1 1 6 2 / t l a c _ a _ 0 0 0 6 2 1 5 6 7 4 5 8 / / t l a c _ a _ 0 0 0 6 2 p d . 
Sentiment analysis with social relations

Previous work on incorporating social relations into sentiment classification has relied on the label consistency assumption, where the existence of social connections between users is taken as a clue that the sentiment polarities of the users' messages should be similar. Speriosu et al. (2011) construct a heterogeneous network with tweets, users, and n-grams as nodes. Each node is then associated with a sentiment label distribution, and these label distributions are smoothed by label propagation over the graph. Similar approaches are explored by Hu et al. (2013), who employ the graph Laplacian as a source of regularization, and by Tan et al. (2011), who take a factor graph approach. A related idea is to label the sentiment of individuals in a social network towards each other: West et al. (2014) exploit the sociological theory of structural balance to improve the accuracy of dyadic sentiment labels in this setting. All of these efforts are based on the intuition that individual predictions p(y) should be smooth across the network. In contrast, our work is based on the intuition that social neighbors use language similarly, so they should have a similar conditional distribution p(y|x). These intuitions are complementary: if both hold for a specific setting, then label consistency and linguistic consistency could in principle be combined to improve performance.

Social relations can also be applied to improve personalized sentiment analysis (Song et al., 2015; Wu and Huang, 2015). Song et al. (2015) present a latent factor model that alleviates the data sparsity problem by decomposing the messages into words that are represented by weighted sentiment and topic units. Social relations are further incorporated into the model, based on the intuition that linked individuals share similar interests with respect to the latent topics. Wu and Huang (2015) build a personalized sentiment classifier for each author; socially connected users are encouraged to have similar user-specific classifier components. As discussed above, the main challenge in personalized sentiment analysis is to obtain labeled data for each individual author. Both papers employ distant supervision, using emoticons to label additional instances. However, emoticons may be unavailable for some authors, or even for entire genres, such as reviews. Furthermore, the pragmatic function of emoticons is complex, and in many cases emoticons do not refer to sentiment (Walther and D'Addario, 2001). Our approach does not rely on distant supervision, and assumes only that the classification decision function should be smooth across the social network.

7 Conclusion

This paper presents a new method for learning to overcome language variation, leveraging the tendency of socially proximate individuals to use language similarly: the phenomenon of linguistic homophily. By learning basis models that focus on different local regions of the social network, our method is able to capture subtle shifts in meaning across the network. Inspired by ensemble learning, we have formulated this model by employing a social attention mechanism: the final prediction is the weighted combination of the outputs of the basis models, and each author has a unique weighting, depending on their position in the social network. Our model achieves significant improvements over standard convolutional networks, and ablation analyses show that social network information is the critical ingredient. In other work, language variation has been shown to pose problems for the entire NLP stack, from part-of-speech tagging to information extraction. A key question for future research is whether we can learn a socially-infused ensemble that is useful across multiple tasks.

8 Acknowledgments

We thank Duen Horng "Polo" Chau for discussions about community detection and Ramon Astudillo for sharing data and helping us to reproduce the NLSE results. This research was supported by the National Science Foundation under award RI-1452443, by the National Institutes of Health under award number R01GM112697-01, and by the Air Force Office of Scientific Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of these sponsors.

References

Mohammad Al Boni, Keira Qi Zhou, Hongning Wang, and Matthew S. Gerber. 2015. Model adaptation for
personalized opinion analysis. In Proceedings of the Association for Computational Linguistics (ACL).

Ramon F. Astudillo, Silvio Amir, Wang Ling, Mário Silva, and Isabel Trancoso. 2015. Learning word representations from scarce and noisy data with embedding sub-spaces. In Proceedings of the Association for Computational Linguistics (ACL).

AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for SMS text normalization. In Proceedings of the Association for Computational Linguistics (ACL).

Justin Basilico and Thomas Hofmann. 2004. Unifying collaborative and content-based filtering. In Proceedings of the International Conference on Machine Learning (ICML).

John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).

Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2014. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NIPS).

John S. Breese, David Heckerman, and Carl Kadie. 1998. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of Uncertainty in Artificial Intelligence (UAI).

John Bryden, Sebastian Funk, and Vincent Jansen. 2013. Word usage mirrors community structure in the online social network Twitter. EPJ Data Science, 2(1).

Ciprian Chelba and Alex Acero. 2006. Adaptation of maximum entropy capitalizer: Little data can help a lot. Computer Speech & Language, 20(4).

Hal Daumé III. 2007. Frustratingly easy domain adaptation. In Proceedings of the Association for Computational Linguistics (ACL).

Penelope Eckert and Sally McConnell-Ginet. 2003. Language and Gender. Cambridge University Press.

Jacob Eisenstein, Brendan O'Connor, Noah A. Smith, and Eric P. Xing. 2010. A latent variable model for geographic lexical variation. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).

Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).

Jacob Eisenstein. 2015. Systematic patterning in phonologically-motivated orthographic variation. Journal of Sociolinguistics, 19.

Marcello Federico. 1996. Bayesian estimation methods for n-gram language model adaptation. In Proceedings of the International Conference on Spoken Language (ICSLP).

Jenny R. Finkel and Christopher Manning. 2009. Hierarchical Bayesian domain adaptation. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).

John L. Fischer. 1958. Social influences on the choice of a linguistic variant. Word, 14.

Santo Fortunato. 2010. Community detection in graphs. Physics Reports, 486(3).

Lisa J. Green. 2002. African American English: A Linguistic Introduction. Cambridge University Press.

Aric A. Hagberg, Daniel A. Schult, and P. Swart. 2008. Exploring network structure, dynamics, and function using NetworkX. In Proceedings of the 7th Python in Science Conference (SciPy).

Matthias Hagen, Martin Potthast, Michael Büchner, and Benno Stein. 2015. Webis: An ensemble for Twitter sentiment detection. In Proceedings of the 9th International Workshop on Semantic Evaluation.

Hussam Hamdan, Patrice Bellot, and Frederic Bechet. 2015. Lsislif: Feature extraction and label weighting for sentiment analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Evaluation.

Dirk Hovy. 2015. Demographic factors improve classification performance. In Proceedings of the Association for Computational Linguistics (ACL).

Xia Hu, Lei Tang, Jiliang Tang, and Huan Liu. 2013. Exploiting social relations for sentiment analysis in microblogging. In Proceedings of Web Search and Data Mining (WSDM).

Bernardo Huberman, Daniel M. Romero, and Fang Wu. 2008. Social networks that matter: Twitter under the microscope. First Monday, 14(1).

Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. 1991. Adaptive mixtures of local experts. Neural Computation, 3(1).

Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In Proceedings of the Association for Computational Linguistics (ACL).

Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the Association for Computational Linguistics (ACL).

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).

Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

William Labov. 1963. The social motivation of a sound change. Word, 19(3).

Yann LeCun, Bernhard Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne Hubbard, and Lawrence D. Jackel. 1989. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4).

Christopher J. Leggetter and Philip C. Woodland. 1995. Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models. Computer Speech & Language, 9(2).

Wang Ling, Chris Dyer, Alan Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).

Miller McPherson, Lynn Smith-Lovin, and James M. Cook. 2001. Birds of a feather: Homophily in social networks. Annual Review of Sociology.

Preslav Nakov, Zornitsa Kozareva, Alan Ritter, Sara Rosenthal, Veselin Stoyanov, and Theresa Wilson. 2013. SemEval-2013 task 2: Sentiment analysis in Twitter. In Proceedings of the 7th International Workshop on Semantic Evaluation.

Mark E. J. Newman. 2003. The structure and function of complex networks. SIAM Review, 45(2).

Sara Rosenthal and Kathleen McKeown. 2011. Age prediction in blogs: A study of style, content, and online behavior in pre- and post-social media generations. In Proceedings of the Association for Computational Linguistics (ACL).

Sara Rosenthal, Preslav Nakov, Svetlana Kiritchenko, Saif M. Mohammad, Alan Ritter
, and Veselin Stoyanov. 2015. SemEval-2015 task 10: Sentiment analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Evaluation.

Aliaksei Severyn and Alessandro Moschitti. 2015. UNITN: Training deep convolutional neural network for Twitter sentiment classification. In Proceedings of the 9th International Workshop on Semantic Evaluation.

Xuehua Shen, Bin Tan, and ChengXiang Zhai. 2005. Implicit user modeling for personalized search. In Proceedings of the International Conference on Information and Knowledge Management (CIKM).

Kaisong Song, Shi Feng, Wei Gao, Daling Wang, Ge Yu, and Kam-Fai Wong. 2015. Personalized sentiment classification based on latent individuality of microblog users. In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI).

Michael Speriosu, Nikita Sudan, Sid Upadhyay, and Jason Baldridge. 2011. Twitter polarity classification with label propagation over lexical links and the follower graph. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).

R. Sproat, A. W. Black, S. Chen, S. Kumar, M. Ostendorf, and C. Richards. 2001. Normalization of non-standard words. Computer Speech & Language, 15(3).

Chenhao Tan, Lillian Lee, Jie Tang, Long Jiang, Ming Zhou, and Ping Li. 2011. User-level sentiment analysis incorporating social networks. In Proceedings of Knowledge Discovery and Data Mining (KDD).

Jiliang Tang, Huiji Gao, and Huan Liu. 2012. mTrust: Discerning multi-faceted trust in a connected world. In Proceedings of Web Search and Data Mining (WSDM).

Duyu Tang, Furu Wei, Bing Qin, Ming Zhou, and Ting Liu. 2014. Building large-scale Twitter-specific sentiment lexicon: A representation learning approach. In Proceedings of the International Conference on Computational Linguistics (COLING).

Duyu Tang, Bing Qin, and Ting Liu. 2015a. Learning semantic representations of users and products for document level sentiment classification. In Proceedings of the Association for Computational Linguistics (ACL).

Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. 2015b. LINE: Large-scale information network embedding. In Proceedings of the Conference on World-Wide Web (WWW).

Mike Thelwall. 2009. Homophily in MySpace. Journal of the American Society for Information Science and Technology, 60(2).

Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from Congressional floor-debate transcripts. In Proceedings of Empirical Methods for Natural Language Processing (EMNLP).

Peter Trudgill. 1974. Linguistic change and diffusion: Description and explanation in sociolinguistic dialect geography. Language in Society, 3(2).

Joseph B. Walther and Kyle P. D'Addario. 2001. The impacts of emoticons on message interpretation in computer-mediated communication. Social Science Computer Review, 19(3).

Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of the National Conference on Artificial Intelligence (AAAI).

Robert West, Hristo Paskov, Jure Leskovec, and Christopher Potts. 2014. Exploiting social network structure for person-to-person sentiment analysis. Transactions of the Association for Computational Linguistics, 2.

Caroline Winterer. 2012. Where is America in the Republic of Letters? Modern Intellectual History, 9(03).

Fangzhao Wu and Yongfeng Huang. 2015. Personalized microblog sentiment classification via multi-task learning. In Proceedings of the National Conference on Artificial Intelligence (AAAI).
