Project Report: Amazon Question-Answer Retrieval

Nikhil Mehta, Dhanasekar Sundararaman & Serge Assaad
{nm208, ds448, sa229}@duke.edu
Electrical and Computer Engineering

Duke University

1 Introduction

The goal of this project is to build and evaluate a simple question-answering system on the Amazon-QA dataset based on the sentence embeddings of questions and answers (i.e. without the sequential structural information contained in sentences). Sentence embeddings are easy to use because they are fast to compute (often linear in the number of words in the sentence), and they contain rich semantic information about the sentence in a compressed representation, as opposed to sequences of word embeddings, which are often time-consuming to work with. By using a pre-trained architecture [1], we can embed question-answer sentence pairs and treat question answering as a retrieval problem, whereby a user asks a question and a trained system retrieves an existing response from an extensive training dataset, as opposed to a generation problem, whereby the system is expected to generate a novel response, which is often technically challenging and time-consuming. We wish to evaluate the performance of these simpler retrieval methods on a question-answering task by comparing two different approaches. We also introduce a method to retrieve only the relevant responses based on any side information (such as reviews or descriptions) that we may have.

We first describe the dataset that we use for this retrieval task and then discuss a pre-existing model that our proposed method builds upon. In Section 3, we discuss a baseline (nearest neighbor) that we implement and discuss the limitations of that method. We then introduce our novel method, based on a deep dual-semantic autoencoder, for the question-answer retrieval task. In Section 3.3, we show how we can leverage other side information (in our case, product reviews) to improve the quality of the answers retrieved. We evaluate our models with a quantitative and qualitative analysis in Section 4.

1.1 Dataset

The Question-Answer (QA) retrieval model is based on the Amazon question-answering dataset [7], [5]. The data is carefully separated by category. There are 21 categories overall, spanning more than 1.4 million answered questions. For this project, we use the ‘Electronics’ category, which comprises about 350,000 question-answer pairs. In Table 1, we summarize the different topics present in the dataset (obtained by running LDA).

We also have the Amazon reviews dataset, which contains around 7.8M reviews spanning more than 500,000 products in the ‘Electronics’ category. We can use these reviews as side information while creating a QA retrieval system. The review data contains the product id, images, review text, and many other features such as review ratings. We are concerned only with the product id and the review text posted by users.

2 Background

Universal sentence encoder: Our models are constructed on top of an existing architecture to generate sentence embeddings from text. Specifically, we generate embeddings for our question-answer pairs using Google’s Universal Sentence Encoder [1], which uses a Deep Averaging Network [3] (shown in Figure 1) to compute a 512-dimensional vector representation of a given tokenized sentence.


Table 1: Summary of the dataset using topic modeling. 10 sampled topics after running LDA with 25 topics.

Topic 1   Topic 2   Topic 3   Topic 4     Topic 5    Topic 6   Topic 7    Topic 8     Topic 9   Topic 10
Camera    Light     Remote    Phone       Speaker    Tablet    Memory     3D          Year      mm
Card      Picture   Control   Headphone   Wire       Ipad      Game       All         One       dimension
SD        Image     Watt      Jack        Receiver   Screen    GB         Color       Month     includes
Video     Screen    Voltage   Android     Input      Not       Antenna    Protector   Old       much
Memory    Video     Volt      Use         Output     Samsung   Internet   Clear       Last      really

Figure 1: Deep Averaging Network [3] architecture

The encoder is trained on unsupervised data from Wikipedia, web news, web question-answer pages, and discussion forums, and augmented with the supervised Stanford Natural Language Inference (SNLI) corpus. By using the pre-trained Universal Sentence Encoder, we effectively outsource most of the semantic processing of the sentence, then use the produced embeddings for downstream computations, which are much faster.

We define the question feature $X \in \mathbb{R}^D$ and the response feature $Y \in \mathbb{R}^D$ as the sentence embeddings from the Universal Sentence Encoder.
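For concreteness, these embeddings can be obtained in a few lines with the publicly released encoder on TensorFlow Hub. This is a minimal sketch; the module URL is that of the public DAN release, which may differ from the exact checkpoint used in our experiments.

```python
import tensorflow_hub as hub

# Load the pre-trained Universal Sentence Encoder (DAN variant).
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

questions = ["Does this camera have image stabilization?"]
answers = ["Yes, it has optical image stabilization built in."]

# X and Y are 512-dimensional sentence embeddings.
X = embed(questions).numpy()  # shape (1, 512)
Y = embed(answers).numpy()    # shape (1, 512)
```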

3 Model

We first introduce a simple yet strong baseline that can be used to solve the question-answering problem. We then present a novel method that performs inference by learning a neural-network-based mapping from the question latent space to the answer space. As we show in Section 4, the proposed method improves upon the baseline. In Section 3.3, we extend our system by introducing a method to add hard attention to the predictions (i.e., considering only a subset of predictions) by leveraging any side information available for products. As we describe subsequently, this hard attention enriches the predictions by endowing the model with the ability to select relevant candidate answers.

3.1 Baseline - Nearest Neighbor (NN)

Nearest neighbor (NN) is a simple method to make predictions based on the 1-nearest neighbor in the training set. Given $N$ pairs of question-answer embeddings $\{X_n, Y_n\}_{n=1}^{N}$, we define the similarity matrix in the question space as $\Sigma_{NN} \in \mathbb{R}^{N_{te} \times N_{tr}}$ ($N_{te}$ and $N_{tr}$ are the numbers of testing and training samples). Note that the similarity matrix can be easily computed as $\Sigma_{NN} = X_{te} X_{tr}^T$, where each row denotes the similarity of a testing question in the test set $X_{te}$ to the $N_{tr}$ training questions in $X_{tr}$. The prediction is the answer to the nearest neighbor, i.e. $Y_{te} = Y_{tr}[\arg\max_{\text{row}}(\Sigma_{NN})]$.


Figure 2: The proposed Autoencoder

Note that the efficacy of the NN-based model is limited, since the model does not consider any information from the answer space. Another limitation of the method is that it can only be used for information retrieval, and cannot be used for other tasks such as generating a response. As we show in the next section, these limitations can be alleviated by learning a mapping from the question space to the answer space.

3.2 Semantic Dual Autoencoder

We now present our semantic dual autoencoder (SD-AE) for the QA system. The proposed architecture, depicted in Figure 2, learns two latent embeddings $S_x \in \mathbb{R}^L$ for questions and $S_y \in \mathbb{R}^L$ for answers. Specifically, we define $S_x$ and $S_y$ as follows:

$$S_x = W_{x \to s}^T X \qquad S_y = W_{y \to s}^T Y \qquad (1)$$

$$X_{rec} = W_{s \to x}^T S_x \qquad Y_{rec} = W_{s \to y}^T S_y \qquad (2)$$

where $W_{x \to s} \in \mathbb{R}^{D \times L}$, $W_{y \to s} \in \mathbb{R}^{D \times L}$, $W_{s \to x} \in \mathbb{R}^{L \times D}$, $W_{s \to y} \in \mathbb{R}^{L \times D}$. The outputs $X_{rec}$ and $Y_{rec}$ are the respective reconstructions of $X$ and $Y$.

As stated in Section 3.1, we learn a mapping from the latent embedding of the question space to the latent embedding of the answer space. In particular, we parameterize the mapping by $\theta$ such that $S_y = \theta S_x$, where $\theta \in \mathbb{R}^{L \times L}$ is the mapping matrix. For simplicity, we depicted only a linear mapping between the sentence features and the latent embeddings here. In practice, we used deep non-linear neural networks for the encoding, the decoding, and the latent mapping ($\theta$). Architecture details such as the number of layers and the type of non-linearity are included in Section 4.
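A sketch of the non-linear variant in PyTorch; the layer sizes, activations, and latent dimension L below are illustrative placeholders rather than the exact settings of Section 4:

```python
import torch
import torch.nn as nn

D, L = 512, 128  # sentence-embedding and latent dimensions (illustrative)

def mlp(d_in, d_out, hidden=256):
    return nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

class SDAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_x, self.dec_x = mlp(D, L), mlp(L, D)  # question autoencoder
        self.enc_y, self.dec_y = mlp(D, L), mlp(L, D)  # answer autoencoder
        self.theta = mlp(L, L)                         # latent Q -> A mapping

    def forward(self, x, y):
        # Eq. (1-2): encode and reconstruct both modalities.
        return self.dec_x(self.enc_x(x)), self.dec_y(self.enc_y(y))

    def predict(self, x):
        # Eq. (4-5): map the question latent to the answer latent and decode.
        return self.dec_y(self.theta(self.enc_x(x)))
```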

3.2.1 Training the parameters

The training procedure for the proposed architecture is similar to that of a traditional autoencoder. Let $W = \{W_{x \to s}, W_{y \to s}, W_{s \to x}, W_{s \to y}\}$. We can learn $W$ using the following loss formulation:

$$\mathcal{L}_W = d(X, X_{rec}) + d(Y, Y_{rec}); \quad \text{where } d(A, B) = \frac{A \cdot B}{\|A\| \, \|B\|} \qquad (3)$$

To learn the parameters $\theta$, we first generate $Y_{pred}$ from $X$ in the following way:

$$S_y = \theta W_{x \to s}^T X \qquad (4)$$

$$Y_{pred} = W_{s \to y}^T S_y \qquad (5)$$

We define the loss function for the parameters $\theta$ as follows:

$$\mathcal{L}_\theta = d(Y, Y_{pred}) \qquad (6)$$

Note that while learning the parameters $\theta$, we freeze the weights $W$. This ensures that the two autoencoders do not affect the Q→A mapping (i.e., the mapping from question to answer is governed only by the parameters $\theta$).
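A sketch of the two-stage procedure, assuming a DataLoader `loader` that yields batches of (question, answer) embedding pairs; we write the criterion as negative cosine similarity so that minimizing it maximizes the similarity of Eq. (3):

```python
import torch
import torch.nn.functional as F

model = SDAE()

def neg_cos(a, b):
    # Negative mean cosine similarity; minimizing this maximizes d(a, b).
    return -F.cosine_similarity(a, b, dim=-1).mean()

# Stage 1: learn the autoencoder weights W.
ae_params = (list(model.enc_x.parameters()) + list(model.dec_x.parameters()) +
             list(model.enc_y.parameters()) + list(model.dec_y.parameters()))
opt_w = torch.optim.Adam(ae_params, lr=1e-3)
for x, y in loader:
    x_rec, y_rec = model(x, y)
    loss_w = neg_cos(x, x_rec) + neg_cos(y, y_rec)
    opt_w.zero_grad(); loss_w.backward(); opt_w.step()

# Stage 2: W stays frozen because only theta is passed to the optimizer.
opt_theta = torch.optim.Adam(model.theta.parameters(), lr=1e-3)
for x, y in loader:
    loss_theta = neg_cos(y, model.predict(x))
    opt_theta.zero_grad(); loss_theta.backward(); opt_theta.step()
```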


3.2.2 Inference during test phase

During the test phase, given a question query, we first generate the sentence embedding $X$ using Google’s Universal Sentence Encoder, and then use Eqs. (4-5) to get the predicted feature $Y_{pred}$ for the question.

Note that when the model is trained, $Y_{pred}$ ideally represents the semantic information of the correct response. Unlike the NN model, having access to the predicted embedding $Y_{pred}$ allows us to perform other downstream tasks in the response space (such as semi-supervised classification, clustering, etc.). This embedding could also be used to generate a response using various generative models such as a VAE [4], a GAN [2], or an autoregressive model; however, that is not the focus of our work. To evaluate and compare our model, we find the embedding closest to $Y_{pred}$ in the training response set $Y_{tr}$. It is important to note that even though we use nearest neighbor in this last step of inference, this is not equivalent to the NN model in Section 3.1. As we show in the subsequent sections, the embeddings learned using the SD-AE are more expressive and yield better performance than the baseline NN.
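In code, inference reduces to one forward pass followed by a nearest-neighbor lookup in the training answers (a sketch; `embed` and `model` come from the earlier snippets, and Y_tr is assumed L2-normalized):

```python
import numpy as np
import torch

def answer_query(question, embed, model, Y_tr, answers_tr):
    """Retrieve the training answer closest to the predicted embedding."""
    x = torch.as_tensor(embed([question]).numpy())
    with torch.no_grad():
        y_pred = model.predict(x).numpy()    # predicted answer embedding
    sims = (Y_tr @ y_pred.T).ravel()         # cosine similarity to Y_tr
    return answers_tr[int(np.argmax(sims))]  # text of the retrieved answer
```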

3.3 Relevant Products Engine (TF-IDF Model)

A retrieval method based only on the semantic similarity of text embeddings may not give contextually correct answers. For instance, in our case, there may exist similar questions about completely different products in the dataset. To alleviate this issue, we introduce a relevant-product-engine, which can leverage side information in the form of reviews present in the Amazon reviews dataset.

The relevant-product-engine finds the k most relevant products given a query product. Relevance is measured based on the reviews of the products. This module has three main components:

• Pre-processing the review data.
• TF-IDF model for topic modeling.
• Cosine similarity to find relevant products.

3.3.1 Pre-processing the review data

The review data is noisy at times, with many words that are not helpful in estimating the topics. Hence, to improve the efficiency of the model, pre-processing proves to be imperative. We perform stop-word removal, punctuation removal, and lemmatization on the corpus.
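A minimal sketch of this pipeline with NLTK (assuming the stopwords, wordnet, and punkt resources have been downloaded):

```python
import string
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(review):
    # Lowercase, strip punctuation, drop stop words, lemmatize.
    review = review.lower().translate(str.maketrans("", "", string.punctuation))
    return [lemmatizer.lemmatize(tok) for tok in word_tokenize(review)
            if tok not in stop_words]
```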

3.3.2 TF-IDF model for topic modeling

After pre-processing the entire corpus, the TF-IDF model is applied to analyze the topics of each product based on its review text. This analysis helps to weed out irrelevant words in review texts that are not representative of the sentiments or contents the review conveys.

Term Frequency, also known as TF, measures the number of times a term (word) occurs in a document (here, each document is a particular product’s review data). In practice, each document will be of a different size, and in a large document the frequencies of terms will be much higher than in smaller ones. Hence, we need to normalize by document size: we divide the term frequency by the total number of terms in the document.

$$\text{tfidf}_{i,d} = \text{tf}_{i,d} \cdot \text{idf}_i \qquad (7)$$

Here $i$ denotes a word in the corpus, and $(i, d)$ denotes a word in a particular product’s review data. The importance of a word in the entire corpus is captured by IDF, while TF captures its importance in a particular product review through its frequency. The product of the two gives the relative importance of a word in a review over the other words present.
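In practice this can be computed with scikit-learn (a sketch; `reviews_by_product` is a hypothetical mapping from product id to its list of review texts):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Treat the concatenated reviews of each product as one document.
product_ids = list(reviews_by_product)
docs = [" ".join(reviews_by_product[pid]) for pid in product_ids]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)  # sparse (num_products x vocab) matrix
```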

3.3.3 Cosine Similarity to find relevant products

Cosine similarity proves to be an efficient metric for finding relevant products given a query product, owing to its ease of operating on vectors. Each product’s review corpus is broken down into a word vector signifying the importance of each word, which the cosine similarity metric uses to its advantage.
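Given the TF-IDF matrix from the previous sketch, the k most relevant products for a query product can be retrieved as follows:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def top_k_relevant(query_idx, tfidf, k=5):
    """Indices of the k products most similar to the query product."""
    sims = cosine_similarity(tfidf[query_idx], tfidf).ravel()
    sims[query_idx] = -1.0  # exclude the query product itself
    return np.argsort(sims)[::-1][:k]
```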


4 Results and Discussions

4.1 Quantitative Results

Table 2 provides the accuracy of the NN and AE models using the Bleu score metric. The Bleu score estimates accuracy by matching n-grams between a list of references and a candidate [6]. It is a more suitable measure of our model’s accuracy than Precision and Recall at k, which are strictly position- and word-dependent: even when the actual essence of an answer is predicted, Precision and Recall penalize the model if the words are not exactly ordered, unlike the Bleu score, which depends on n-grams. The table also shows the accuracy of the two models with and without the Relevant Products Engine.

The test data consists of a set of questions and answers. For predictions without the engine, the list of references consists of the nearest neighbours of the actual answer to the test question, while the candidate is the predicted answer. For predictions with the engine, the list of references consists of the nearest neighbours of the actual answer that are present in the relevant products given by the engine, and the candidate is the closest predicted answer.
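A sketch of the scoring with NLTK’s BLEU implementation, using the standard uniform n-gram weights (the reference/candidate construction follows the protocol above):

```python
from nltk.translate.bleu_score import sentence_bleu

WEIGHTS = {1: (1, 0, 0, 0), 2: (0.5, 0.5, 0, 0),
           3: (1/3, 1/3, 1/3, 0), 4: (0.25, 0.25, 0.25, 0.25)}

def bleu_n(references, candidate, n):
    """BLEU-n of a predicted answer against a list of reference answers."""
    refs = [ref.split() for ref in references]
    return sentence_bleu(refs, candidate.split(), weights=WEIGHTS[n])
```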

We can observe that the accuracy is higher for the autoencoder model than for the nearest-neighbour baseline. The accuracy naturally fades as the n in n-grams is increased, but it does not decrease as much for predictions with the engine as it does for those without it. This tells us that the relevance factor in the model retains accuracy even when the Bleu score is evaluated for higher n-grams.

Table 2: Comparison of Bleu(n-gram) scores.

Model              Predictions   Bleu1   Bleu2   Bleu3   Bleu4
Nearest Neighbor   w/o Engine    .6614   .5788   .4675   .3662
SD-Autoencoder     w/o Engine    .7107   .6186   .5006   .3959
Nearest Neighbor   w/ Engine     .6851   .6436   .5825   .5167
SD-Autoencoder     w/ Engine     .7839   .7423   .6838   .6245

4.2 Nearest Neighbor Results

In Table 3, we show the nearest neighbors found by the baseline model described in Section 3.1.

Table 3: The nearest neighbors (NN) of the query found in the training samples, using the similarity defined in Section 3.1.

Query 1: can the hard drive be upgraded? if so how many mm tall can a upgraded hdd be?

NN-1: Can the HDD be upgraded and how large? Will it take SSD HDD?
NN-2: Hi, I was wondering if the 500GB Hard drive on this computer can be upgraded, and if so what....
NN-3: Can the hard drive be upgraded?

Query 2: dose this work with xbox 360 with the mic and such with out trouble

NN-1: does it work with a xbox 360
NN-2: Does this work with xbox 360?
NN-3: Does it work with XBox 360?

Query 3: Does it have focus peaking?

NN-1: Is it Auto focus?
NN-2: can i use focus peaking with this lens
NN-3: Does the camera focus well and easily?

4.3 Qualitative Results of the Relevant-Product-Engine

The following figure gives a sample of products and their relevant products found using the Relevant Products Engine.


Figure 3: The figure shows the qualitative analysis of the relevant product engine. The first column is the query product, and columns 2-6 are the top relevant products found using the TF-IDF engine.

The first column in the figure is the actual/baseline image given to the Relevant Products Engine. The next 5 columns represent the top 5 relevant products found using the engine, which was explained in Section 3.3.

Figure 4: The figure shows the similarity scores of the top 100 products for each of the query products shown in the first column of Fig. 3.

Figure 4 shows how the similarity decreases as we move down the ranking of the top-k products. A detailed example of how effectively the relevant products engine extracts the most relevant products is shown in Appendix B.

4.4 Qualitative Results

In this subsection, we take 3 actual question-answer pairs from the Amazon dataset and compare them with those given by our model in the following cases.


1. Nearest Neighbours without Relevant Products Engine.
2. Auto Encoder without Relevant Products Engine.
3. Nearest Neighbours with Relevant Products Engine.
4. Auto Encoder with Relevant Products Engine.

The corresponding product images are attached to differentiate the products from which the Nearest Neighbour and Auto Encoder models retrieve answers, with and without the Relevant Products Engine.

Figure 5: In this figure, we demonstrate the relevance of the predictions made by both models (NN and SD-AE). Each row represents a single test case. Column 1 is the query product (for which the question was asked). Columns 2 to 5 are the corresponding products of the answers retrieved by the models. Columns 2 and 3 were retrieved by the NN and AE models respectively without using the relevant product engine. Columns 4 and 5 depict the products retrieved by the NN and AE models when endowed with the relevant product engine.

1. Actual Question: Does anyone know if you can use 2 headsets together from the same receiver?

Actual Answer: I believe the question should be: can you use 2 headsets from the same transmitter. The headset itself is a receiver and the unit that plugs into the audio of the TV is a transmitter. That being said, I don’t know how you might obtain a headphone without buying the entire set. Also be aware that Sony has developed a very bad reputation as far as backing up their product warranty.

Model output answers:
Nearest Neighbours without Engine:

Answer: This would work with any Bluetooth device as long as the OS can support the driver for the connected Bluetooth device. The dongle is just the connection bridge; the computer must be able to recognize the connected device.

Auto Encoder without Engine:

Answer: It works with any normal headphones. The 2 heads connect to your airplane seat, then just plug your headphone to the adapter. I don’t know anything about the M100 but my M80 work fine, as would any 3.5mm jack.

Nearest Neighbours with Engine:


Answer: Yes, it was delivered and my husband put it together with no problems and used it that night. Blessing for both of us.

Auto Encoder with Engine:

Answer: I’m not 100 percent sure but I think so. It can definitely use 2, so I don’t see why it won’t support a third. The headphones are basically receivers; the little box is a transmitter. I have to say they work pretty well. I can recommend. Hope this helps.

2. Actual Question: does it have ’avoid highways’ like tomtom has?

Actual Answer: Yes it does. It has 4 choices! Fastest Way, Shortest Way, Most Freeways and Least Freeways. It is AWESOME

Model output answers:
Nearest Neighbours without Engine:

Answer: Yes. Very detailed. And does turn by turn with audible prompts. Another neat feature is round trip generation: tell it how far you want to ride and it comes up with 2 or 3 possible routes for you.

Auto Encoder without Engine:

Answer: Yes. It has GPS.

Nearest Neighbours with Engine:

Answer: Yes, when you choose "least use of freeways"

Auto Encoder with Engine:

Answer: It does come with maps; however, they may or may not be the latest. Garmin does allow you to update the maps once for free, but it does require a standard USB to mini-USB cable which is not provided. For more information visit the Garmin website.

3. Actual Question: Does this case have the "rubber" feel to it? Or is it strictly smooth plastic?

Actual Answer: Nope! Hard plastic and fantastic! It’s crazy how thin it is and how well it snaps into place.

Model output answers:
Nearest Neighbours without Engine:

Answer: it does have a pretty decent rubberized feel to it.

Auto Encoder without Engine:

Answer: Yes it does! It’s a fairly thick, rubber material but no screen protector.

Nearest Neighbours with Engine:

Answer: It feels like plastic but they’re amazing, easy to put on and take off.

Auto Encoder with Engine:

Answer: No clue. But the flimsy filmy protector is not great. I used it initially, but when you open the cover the protector would sometimes stick to the screen. It fit fine, but adherence was basically the stickiness of the membrane itself, which over time gets dust stuck on it. Don’t bother.

From the analysis, we can see that the NN and AE models are better able to retrieve relevant answers for questions with the Relevant Products Engine. More answers predicted by the models are shown in Appendix A.


5 Conclusion and Future work

We have presented 3 retrieval methods for question answering, which have the advantage of using sentence embeddings from the Universal Sentence Encoder; this makes downstream computation much faster, since the embeddings are compressed representations of the questions and answers. As a next step, we could explore generative approaches (e.g. GAN, VAE) to generate sentences. Specifically, we could use our SD-AE architecture to generate predicted response embeddings, then feed them through a decoder to generate a sentence. This would allow for the generation of novel responses instead of retrieving from the existing training data.


References

[1] Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. Universal sentence encoder. CoRR, abs/1803.11175, 2018.

[2] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.

[3] Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daumé III. Deep unordered composition rivals syntactic methods for text classification. In Association for Computational Linguistics, 2015.

[4] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.

[5] Julian McAuley and Alex Yang. Addressing complex and subjective product-related queries with customer reviews. In Proceedings of the 25th International Conference on World Wide Web, pages 625–635. International World Wide Web Conferences Steering Committee, 2016.

[6] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics, 2002.

[7] Mengting Wan and Julian McAuley. Modeling ambiguity, subjectivity, and diverging viewpoints in opinion question answering systems. In Data Mining (ICDM), 2016 IEEE 16th International Conference on, pages 489–498. IEEE, 2016.


6 Supplementary

6.1 Appendix A

Question 1

NN without RPE

Answer 1: No I was never able to get two headphones to work at one time without a splitter. We actually bought cords to connect them because they worked better. The wireless aspect didn’t work out for my family.

Answer 2: This would work with one at a time - as in, you’d have to buy two of the 1/4" cables for both auxiliary and microphone. The included splitter only splits for two outputs, not one output and one input. Sorry yo!

AE without RPE

Answer 1: It does come with an adapter that splits out into standard right/left RCA audio (female), so that should probably suffice with most applications. I never personally tried them on a TV. Otherwise it’s a 1/8th audio plug you would find on any pair of headphones etc.

Answer 2: Yes, they use a standard 3.5mm headphone jack for speakers and microphone. Make sure you have those two ports, usually green and pink, but they will be for audio out and mic in, these are not USB. Great sound too.

NN with RPE

Answer 1: We bought two headsets - each comes with its own optical connector wire. We then bought an optical splitter box (wasn’t too costly) and one extra optical cord (tiny wire). We hooked the one optical cord to the TV "optical out" and the splitter box "in." We then hooked each headphone’s optical cord connector into the two "outs" on the splitter box. Both headphones sound FABULOUS! Hope this helps. As far as I know, you cannot piggyback another set of headphones, as each "tower" (which is also the recharging unit for the headphones) connects to its own individual signal.

Answer 2: No, it does not

AE with RPE

Answer 1: If your bass amp has a 1/4 headphone jack then these should work just fine.

Answer 2: Yes - those devices have the standard headphone jack and these headphones will work fine with them.

Question 2

NN without RPE

Answer 1: Yes, gives you both.. GPS is actually quite good, didn’t expect that. But I guess finding updated maps is not easy? If I ever run into anything that won’t navigate then I just go to my Galaxy S4 with Waze or whatever... It works almost all the time for me. It’s a very nice unit for the price.. I was a little skeptical on a PA system like this but the sound is very good, bluetooth works flawless (phone and streaming Pandora, Slacker, music player, etc), playing movies (AVI mostly).. Video quality is excellent. Most pro installers will charge you a lot to put this in because they are not used to it, plain and simple. It’s not hard at all but I would recommend installing yourself. I had to cut a slightly larger hole where my old Pioneer Avic-1 was installed.

Answer 2: Yes, when you choose "least use of freeways"

AE without RPE

Answer 1: Yes. But if you want to be sure, go to the Garmin Web site and select your Garmin unit with the map to see if it is compatible.

Answer 2: Yes, it has GPS

NN with RPE

Answer 1: Yes it does i use it all the time for that

Answer 2: Yes. Under SETTINGS on the Main Menu, there is a NAVIGATION selection that allows you to set route preferences. From there, you can select AVOIDANCES


which includes: U-Turns, Highways, Tolls and Fees, Traffic, Ferries, Carpool Lanes, and Unpaved Roads.

AE with RPE

Answer 1: No Wifi access is required. It uses GPS satellites for mapping and directions. There was no stand included in the item that I bought.

Answer 2: I’m not a good source to answer your question. I sporadically access My Garmin to check for system and map updates and that’s about it. I can only wonder if what you seek is available for a price from Garmin. You might check out either their website or possibly via your My Garmin app.

Question 3

NN without RPE

Answer 1: It feels like plastic but they’re amazing, easy to put on and take off.

Answer 2: It feels like a hard case, kind of in the middle between a hard plastic case and hard rubber. It isn’t shiny.

AE without RPE

Answer 1: It’s a soft, durable vinyl and/or rubber. It offers some protection, but I would be concerned about the keyboard circuit should it be dropped on a hard surface. I have had no problems with my unit.

Answer 2: Yes, it’s a sturdy elasticized loop on the side of the case. The stylus stays put, no slipping. Works just fine!

NN with RPE

Answer 1: it does have a pretty decent rubberized feel to it.

Answer 2: The top cover is hard, smooth, see-through plastic. The bottom has multiple slots for ventilation. The keyboard cover feels and types like the original keyboard. We love it and the color! Hope this helps.

AE with RPE

Answer 1: No, I think you could remove it pretty easily if you want to. In my experience with cases, a bigger problem is finding one that will stay on tightly and not break. The plastic in the Snugg case is a bit softer and less brittle than my previous Incase. I recommend it without reservation.

Answer 2: It’s a hard plastic case that can slide easily, but from my experience ithasn’t!

6.2 Appendix B

Figure 6: The figure shows the similarity scores of the top 100 products for each of the query products shown in the first column of Fig. 3.

The query product is a “Sony Lens” with certain lens specifications. The top 3 products have almost exactly the same lens specifications and features. The fourth product is a “Sony”-branded lens with a different lens size.
