<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
<channel>
<title>Ask Ghassem - Recent questions tagged sentiment</title>
<link>https://ask.ghassem.com/tag/sentiment</link>
<description>Powered by Question2Answer</description>
<item>
<title>Binary Classification and neutral tag</title>
<link>https://ask.ghassem.com/978/binary-classification-and-neutral-tag</link>
<description>&lt;p&gt;I am trying to build a sentiment analysis model trained with a binary cross-entropy loss. I have a batch of tweets, some tagged as positive (labeled 1) and some as negative (labeled 0). I have also managed to gather some tweets tagged as neutral, but there are fewer of them than positive or negative ones. My idea is to label the neutral tweets 0.5 to balance the classification probabilities. Is this legitimate?&lt;/p&gt;
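To make the idea concrete, here is a minimal sketch of what a 0.5 soft label actually does under binary cross-entropy (a pure-Python illustration, not my real training code; the bce helper is hypothetical):

```python
import math

def bce(y_true, p_pred):
    # Binary cross-entropy for one example with a (possibly soft) label.
    return -(y_true * math.log(p_pred) + (1 - y_true) * math.log(1 - p_pred))

# With a soft label of 0.5, the loss is minimized when the model
# predicts p = 0.5, so neutral tweets are pushed toward maximum
# uncertainty rather than rebalancing the positive/negative classes.
for p in (0.1, 0.5, 0.9):
    print(p, round(bce(0.5, p), 4))
```

So a 0.5 label trains the model to output 0.5 on neutral tweets; whether that is the behavior I want is part of my question.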

</description>
<category>Deep Learning</category>
<guid isPermaLink="true">https://ask.ghassem.com/978/binary-classification-and-neutral-tag</guid>
<pubDate>Sat, 30 Jan 2021 10:08:01 +0000</pubDate>
</item>
<item>
<title>My GloVe word embeddings contain sentiment?</title>
<link>https://ask.ghassem.com/972/my-glove-word-embeddings-contain-sentiment</link>
<description>&lt;p&gt;I&#039;ve been researching sentiment analysis with word embeddings. Several papers state that word embeddings ignore the sentiment information of the words in a text. One paper reports that, among the top 10 semantically most similar words, around 30 percent have opposite polarity, e.g. happy vs. sad.&lt;/p&gt;

&lt;p&gt;So, I computed word embeddings on my dataset (Amazon reviews) with the GloVe algorithm in R. When I then looked at the most similar words under cosine similarity, I found that virtually every neighbor shares the query word&#039;s sentiment (e.g. beautiful - lovely - gorgeous - pretty - nice - love). I was wondering how this is possible, since I expected the opposite from the papers I read. What could explain my findings?&lt;/p&gt;
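The check I ran looks roughly like the following (toy 3-dimensional vectors standing in for my actual GloVe output from R; rewritten in Python for illustration):

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy vectors standing in for learned GloVe embeddings: co-occurrence-based
# training places words that appear in similar contexts close together,
# whether or not they share polarity.
happy = np.array([0.9, 0.1, 0.2])
sad = np.array([0.8, 0.2, 0.3])
print(cosine_similarity(happy, sad))
```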

&lt;p&gt;Two of the many papers I read:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Yu, L. C., Wang, J., Lai, K. R. &amp;amp; Zhang, X. (2017). Refining Word Embeddings Using Intensity Scores for Sentiment Analysis. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(3), 671-681.&lt;/li&gt;
&lt;li&gt;Tang, D., Wei, F., Yang, N., Zhou, M., Liu, T. &amp;amp; Qin, B. (2014). Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, 1: Long Papers, 1555-1565.&lt;/li&gt;
&lt;/ul&gt;</description>
<category>General</category>
<guid isPermaLink="true">https://ask.ghassem.com/972/my-glove-word-embeddings-contain-sentiment</guid>
<pubDate>Sun, 03 Jan 2021 14:09:37 +0000</pubDate>
</item>
</channel>
</rss>