NoSQL Databases

137 Solutions · 267 Contributors

A NoSQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, simpler "horizontal" scaling to clusters of machines, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, wide column, graph, or document) are different from those used by default in relational databases, making some operations faster in NoSQL. The data structures used by NoSQL databases are also sometimes viewed as "more flexible" than relational database tables.

How do MongoDB and Active Directory go together?

I have a small bit of exposure to both MongoDB and Active Directory. But is there any synergy in using them together?

Curious.

Thanks

Why are NoSQL databases so much faster than SQL databases?

I am curious about the steps needed to do something like insert a record in a NoSQL versus a SQL database.

I have never needed to understand what was under the covers in SQL Server, or any other database, so I am curious about which steps are simply removed by NoSQL.

Thanks
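As a rough illustration of the write path, here is a minimal sketch of a document insert with the official MongoDB Node.js driver (the connection string and the users collection are placeholders). Compared with a typical relational insert, there is usually no fixed column list to validate the row against, no constraint or foreign-key checks, and no join structures to maintain; the document is written as given and only the collection's indexes are updated, which is part of why simple writes can be faster.

// Minimal sketch: a schema-less insert with the MongoDB Node.js driver.
const { MongoClient } = require('mongodb');

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const users = client.db('test').collection('users');

  // No fixed schema to validate against and no foreign-key checks:
  // the document is stored as-is; only the collection's indexes are updated.
  await users.insertOne({ name: 'Ada', tags: ['admin'], signedUp: new Date() });

  await client.close();
}

main().catch(console.error);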
Hi,

How does DB2 work with MongoDB?
Hi,

What is the solution for integrating Oracle with MongoDB? How does data flow between them?
Hi,

Does anyone know how to integrate MariaDB and MongoDB so that they work together well?

How about MariaDB and Hadoop?
Hi,

Does anyone know how to integrate MS SQL and MongoDB so that they work together well?

How about MS SQL and Hadoop?
What are the differences between a traditional RDBMS and a NoSQL database?
MongoDB vs. PostgreSQL: can anyone point out the pros and cons? And any advice from personal experience?
Hello. I am working on a Node application. I'm trying to use Passport for the login. I found an example online of what I want to do; however, I cannot get it to work. I am running into an issue with my routes: I get a compiler error that I have not seen before. I tried to attach the project to this question, but it won't accept zip files, so I will just attach my index.js and AuthController.js. This is the error I get in index.js when it tries to do my first route.
 
var auth = require("../controllers/AuthController.js");

// restrict index for logged in user only
router.get('/', auth.home);


I get this error:  
index.js:349
throw new mongoose.Error.OverwriteModelError(name);
OverwriteModelError: Cannot overwrite `User` model once compiled.
at MongooseError.OverwriteModelError (C:\Users\ernest\Documents\coding bootcamp\code\Passport MongoDB\node-passport-auth\node_modules\mongoose\lib\error\overwriteModel.js:18:11)
    at Mongoose.model (C:\Users\ernest\Documents\coding bootcamp\code\Passport MongoDB\node-passport-auth\node_modules\mongoose\lib\index.js:349:13)
    at Object.<anonymous> (C:\Users\ernest\Documents\coding bootcamp\code\Passport MongoDB\node-passport-auth\models\user.js:12:27)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)
    at Module.load (module.js:487:32)
    at tryModuleLoad (module.js:446:12)
    at Function.Module._load (module.js:438:3)
    at Module.require …
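For what it is worth, this error usually means mongoose.model('User', schema) is executing more than once, for example because the user model file gets compiled again after the 'User' model already exists. A common guard is to reuse the already-compiled model; here is a minimal sketch against a hypothetical models/user.js with placeholder fields:

// models/user.js (sketch): reuse the compiled model instead of redefining it.
const mongoose = require('mongoose');

const userSchema = new mongoose.Schema({
  email: String,      // placeholder fields
  password: String
});

// If the 'User' model was already compiled elsewhere, reuse it;
// otherwise compile it exactly once here.
module.exports = mongoose.models.User || mongoose.model('User', userSchema);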
Hi,

What tools or approaches do you use to synchronize data between a traditional RDBMS (MS SQL or MariaDB) and a NoSQL database?

I want to build or buy a Mobile app for use in the field.  It needs to have the following:

  • Name
  • Phone
  • Email
  • Opt In Y or N
  • Question 1
  • Question 2
  • Question 3
  • Save to DB if online
  • Save to Local or Cache for sync when offline
  • Send SMS to Survey giver that their submission has been completed

No registration needed for survey giver.  The survey will be given by internal employees.

I have found several paid versions that are very expensive.  I considered building:
  • Mobile App
  • SMS Gateway
  • Hosted in Cloud
  • Couchbase Mobile? for offline

Any thoughts?
I am about to start playing around with the Bitcoin open-source code and got introduced to a new term: a blockchain database.

What technologies are needed for this? Is this NoSQL?

Thanks.
Hello All,

I am reading a book called Express.js Blueprints. I am trying to wrap my mind around authentication using Passport. Serializing and deserializing is not registering with me. I have just started learning Node and Express, so that's a big reason why.

Here's code from the book on setting up Passport. Starting with line 5, can someone please break down what's happening? Where is the "user" parameter coming from in the serializeUser function? Where did "user.id" come from?

var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;
var User = require('mongoose').model('User');

passport.serializeUser(function(user, done) {
  done(null, user.id);
});

passport.deserializeUser(function(id, done) {
  User.findById(id, done);
});

passport.use(new LocalStrategy(function(email, password, done) {
  User.findOne({
    email: email
  }, function(err, user) {
    if (err) return done(err);
    if (!user) {
      return authFail(done);
    }
    if (!user.validPassword(password)) {
      return authFail(done);
    }
    return done(null, user);
  });
}));

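As an illustration (this is not from the book): the user argument that serializeUser receives is whatever object the strategy's verify callback passed to done(null, user) after a successful login. Passport then stores only the serialized value in the session (here user.id, the id of the Mongoose document), and deserializeUser turns that id back into a full user object on each later request. A stripped-down sketch with a stand-in object in place of a real Mongoose document:

// Illustration only: a stand-in object instead of a Mongoose document.
var passport = require('passport');
var LocalStrategy = require('passport-local').Strategy;

var fakeUser = { id: '42', email: 'a@example.com' };

passport.use(new LocalStrategy(function(email, password, done) {
  // On a successful lookup, the verify callback hands Passport a user object...
  done(null, fakeUser);
}));

passport.serializeUser(function(user, done) {
  // ...and that same object arrives here; only its id goes into the session.
  done(null, user.id);
});

passport.deserializeUser(function(id, done) {
  // On later requests the stored id is looked up again
  // (normally User.findById(id, done)).
  done(null, fakeUser);
});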
Hi,

I'm new to Couchbase (CB) and I have documents with the following fields (among others):
  "_class": "com.wrapper",
  "dbProfile": {
    "operatorId": "3026",
    "profileType": "BEL_PR_LT"
  }

I have several combinations of the pair dbProfile.operatorId/dbProfile.profileType among the documents, but they all have the same _class.

I want to run a query that displays all the possible combinations of dbProfile.operatorId/dbProfile.profileType only once, even if a combination appears more than once (DISTINCT in SQL).

What would this look like in a CB query? My bucket name is apoiu.

Tks,
J
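Not verified against your cluster, but as a sketch: the DISTINCT equivalent in N1QL, run here through the Couchbase Node.js SDK 3.x (older 2.x SDKs use a different call style, and the host and credentials below are placeholders), could look like this.

// Sketch: DISTINCT over two nested fields in the apoiu bucket via N1QL.
const couchbase = require('couchbase');

async function distinctProfiles() {
  const cluster = await couchbase.connect('couchbase://localhost', {
    username: 'Administrator',   // placeholder credentials
    password: 'password'
  });

  const result = await cluster.query(
    'SELECT DISTINCT dbProfile.operatorId, dbProfile.profileType ' +
    'FROM `apoiu` WHERE `_class` = "com.wrapper"'
  );

  result.rows.forEach(row => console.log(row));
  await cluster.close();
}

distinctProfiles().catch(console.error);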
I am trying to connect to MongoDB in MongoDB Atlas from my JavaScript, but I keep getting the following error.

MongoError: connection 5 to isaaccluster-shard-00-02-yng8g.mongodb.net:27017 closed
    at Function.MongoError.create (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\error.js:29:11)
    at TLSSocket.<anonymous> (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\connection\connection.js:202:22)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:101:20)
    at TLSSocket.emit (events.js:191:7)
    at _handle.close (net.js:511:12)
    at Socket.done (_tls_wrap.js:332:7)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at TCP._handle.close [as _onclose] (net.js:511:12)


Here's my code
const express = require('express')
const hbs = require('express-handlebars')
const mongoose = require('mongoose')
const bodyParser = require('body-parser')

mongoose.connect('mongodb://XXXXXXXXXXXX@isaaccluster-shard-00-00-yng8g.mongodb.net:27017,isaaccluster-shard-00-01-yng8g.mongodb.net:27017,isaaccluster-shard-00-02-yng8g.mongodb.net:27017/<DATABASE>?ssl=true&replicaSet=IsaacCluster-shard-0&authSource=admin')

const itemEntry = require('./models/toDoEntry.js')
const app = express()

app.get('/', function( req, res ) {
	itemEntry.find({}, function( err, itemEntries ) {
			res.render('todoList',

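That stack trace only says the TLS connection was closed; with Atlas this often traces back to the cluster's IP whitelist or to the authentication settings rather than to the code itself. Either way, it helps to catch the initial connection failure explicitly so the first error is visible. A minimal sketch (the URI below is a placeholder, not your cluster string):

// Sketch: surface the first connection failure instead of a later socket error.
const mongoose = require('mongoose');

mongoose.connect('mongodb://user:pass@host:27017/mydb?ssl=true&authSource=admin')
  .then(() => console.log('connected'))
  .catch(err => {
    console.error('initial connection failed:', err.message);
    process.exit(1);
  });

// Errors that occur after the initial connection are emitted here.
mongoose.connection.on('error', err => {
  console.error('connection error:', err.message);
});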
Hi there,

I have a mongo aggregation query that works fine in the mongo shell (and Robomongo), but I cannot work out how to translate it into a PHP query.

I am using PHP 5.6 with the latest mongo class (MongoDB\Driver\Query).  The mongo query looks like this:

db.products.aggregate(
  [
    { $match: {
        vendor_name: "vendor8",
        distributor_id: 8
      }
    },
    { $sort: {
        cw_product_code: 1, download_Date: 1
      }
    },
    { $group: {
        _id: "$cw_product_id",
        lastDownloadDate: { $last: "$download_Date" }
      }
    }
  ],
  { allowDiskUse: true }
)



Any help to point me in the right direction would be appreciated.
Hi, I have the following document in MongoDB:

name: "john",
state: "GA",
city: [
     {"atlanta", 30350},
     {"atlanta", 30351},
     {"atlanta", 30352},
     {"marietta", 45093}
]


How do I aggregate the array of cities and get a document like this:

name: "john",
state: "GA",
city: [
     {"atlanta", "30350, 30351, 30352"},
     {"marietta", "45093"}
]
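The snippet above is not literal JSON, so the following sketch assumes each city entry actually has named fields such as city.name and city.zip, and that the collection is called people (both assumptions). One way to collapse the zips per city is a small aggregation pipeline; joining the zip values into a single string, as in the desired output, is easiest to do in the application afterwards.

// Sketch: group the city array by city name, collecting the zips.
db.people.aggregate([
  { $unwind: "$city" },
  // First collect all zips for each (person, city name) pair...
  { $group: {
      _id: { name: "$name", state: "$state", city: "$city.name" },
      zips: { $push: "$city.zip" }
  }},
  // ...then rebuild one document per person with a city array.
  { $group: {
      _id: { name: "$_id.name", state: "$_id.state" },
      city: { $push: { name: "$_id.city", zips: "$zips" } }
  }},
  { $project: { _id: 0, name: "$_id.name", state: "$_id.state", city: 1 } }
])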
Hello there,

I am a Java developer and am very new to Linux and Cassandra. I am using Ubuntu 16.04 and have installed Cassandra v3.9.0 from the DataStax site. Now when I run the bin/cqlsh command from the Cassandra directory I get the error:
No appropriate python interpreter found.

To make sure I have Python, I tried the command python -v and I get this message:
The program 'python' can be found in the following packages:

Do I have Python by default? Please help.

cheers
Zolf
Hello  Experts,

I would like to know whether the approach below can help us build an online application where a registered user can upload videos and post comments. We have chosen an RDBMS (MySQL) and a NoSQL database (MongoDB). In the RDBMS we would keep the master data, such as the registered-user, country, and region tables, which are static. The transactional data, such as uploaded videos and comments, would go into the NoSQL database (MongoDB). With this approach, will we manage the data better, or is there another approach that would yield a better result? Infrastructure-wise, we will be deploying on Amazon Web Services; the idea is to have an environment where agility, performance, and scalability reign supreme.



Thanks


Roy

Hi,

Assuming I have a parent class that I filter on various properties, one of which is an array of items: say I want to return the parent item only if the values in its array of items are above a minimum and below a maximum. That's fine, I can work that bit out. What if I then want to sort on the filtered result set of those items?

I made a C# fiddle example to show what I'm trying to achieve: https://dotnetfiddle.net/mV4d28 (note that foo2 is returned first even though foo1 has items in its array that are less than those in foo2).

As I need to do this using an index, I need the index to be able to compute the order by based on the filter criteria used in my query.

I know Elasticsearch has an inner hits function that does this, and Mongo has pipelines which also do this, so I'm sure Raven must have a way of doing this too?

I was hoping that using just an index and a transform with params I could achieve this, so I tried it.

My index and transform look like this:

public class familyTransfrom : AbstractTransformerCreationTask<ParentItem>
{
    public class Result : ParentItem{
        public double[] ChildItemValuesFiltered { get; set; }
    }
    public familyTransfrom(){
        TransformResults = parents => from parent in parents
        let filterMinValue = Convert.ToDouble(ParameterOrDefault("FilterMinValue", Convert.ToDouble(0)).Value<double>())
        let filterMaxValue = 

I am currently trying to run a script that I have written; however, it constantly hangs halfway through. It has worked on the odd occasion, but more often than not it gets nowhere. Here is the script:

:: load metadata only

impdp  *username*/******@Server1 exclude=user REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN schemas=SC_MUTBLDN CONTENT=METADATA_ONLY dumpfile=ALL_METADATA_DAILY.dmp logfile=ALL_METADATA_DAILY.log



:: load data only

impdp  *username*/******@Server1  Full=Y EXCLUDE=TABLE:"LIKE'CRAC%'"  dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log

impdp  *username*/******@Server1  EXCLUDE=TABLE:"LIKE'CRAC%'"  REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN   dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log


:: dump data minus crac tables

impdp *username*/******@Server1 tables=CRAC_TYPE dumpfile=CRAC_TYPE.dmp table_exists_action=replace logfile=CRAC_TYPE.log
impdp *username*/******@Server1 tables=CRAC_KEYWORD dumpfile=CRAC_KEYWORD.dmp table_exists_action=replace logfile=CRAC_KEYWORD.log
impdp *username*/******@Server1 tables=CRAC_DEALTYPE dumpfile=CRAC_DEALTYPE.dmp table_exists_action=replace logfile=CRAC_DEALTYPE.log
impdp *username*/******@Server1 tables=CRAC_TABLENAME dumpfile=CRAC_TABLENAME.dmp table_exists_action=replace logfile=CRAC_TABLENAME.log
impdp *username*/******@Server1 tables=CRAC_DATABASENAME dumpfile=CRAC_DATABASENAME.dmp table_exists_action=replace logfile=CRAC_DATABASENAME.log
I have a 3-node DataStax Cassandra (Community) cluster with huge data. I have a few tables which contain 3-5 billion records each. I want to delete data that is older than 90 days from those tables.

The problem is how to run a select query that completes without a timeout. I am currently running the query below:

NOW=$(date -d "-3 month" +"%Y-%m-%d")
select day_ts from table_name where minute_ts < '$NOW' LIMIT 100000 ALLOW FILTERING;


Even if I limit the select query result, it will still scan the whole 3-5 billion records and then filter the data.

Please suggest an efficient way to do this.
Is MarkLogic v5 FIPS compatible/enabled? I can't find anything online for this version.
I have a financial project which receives real-time stock data from a data vendor, saves it into a MySQL database, then retrieves the data and sends it to the end user's browser. The client software provided by the data vendor to receive the stock data is a program written in C/C++ running on the server; this client can save the data into the MySQL database (it does not have to be MySQL and could be switched to any other database). In order to retrieve the data from the database as quickly as possible, is there any framework I can use? I have heard about CES or ESP, and Spark Streaming; can any of them be used for my project? If not, how can I retrieve only the unread data from the database as soon as it reaches the database? The stock data feed is probably about 1000 records a second at maximum (my wild guess, which might not be correct). See the sample below.

+---------------------+--------+-------------------+-------------+
| insertTime          | symbal | trade_time        | trade_price |
+---------------------+--------+-------------------+-------------+
| 2016-09-15 04:00:00 | AAPL   | 20160915040000017 |      111.70 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000017 |      111.70 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000200 |      111.69 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000200 |      111.69 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000272 |      111.51 |
| 2016-09-15 04:01:14 | AAPL   | 20160915040113878 |      111.57 |
| 2016-09-15 04:01:14 | AAPL   …
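Not an answer to the framework question, but as a minimal illustration of the "retrieve only the unread data" part: if the table has an auto-increment id column (an assumption; it is not shown in the sample), a small Node.js poller using the mysql package can keep a cursor of the last id it has already delivered. A streaming engine would replace this polling with push-based processing, but the sketch shows the idea; the table name and connection details are placeholders.

// Sketch: poll for rows not yet delivered, using an assumed auto-increment id.
const mysql = require('mysql');

const connection = mysql.createConnection({
  host: 'localhost',
  user: 'user',
  password: 'password',
  database: 'stocks'
});

let lastId = 0;   // highest id already sent to clients

function pollNewTrades() {
  connection.query(
    'SELECT id, insertTime, symbal, trade_time, trade_price FROM trades WHERE id > ? ORDER BY id',
    [lastId],
    (err, rows) => {
      if (err) {
        console.error(err);
      } else if (rows.length) {
        lastId = rows[rows.length - 1].id;
        // push the new rows to the browser here (e.g. over a WebSocket)
      }
      setTimeout(pollNewTrades, 100);   // poll again shortly
    }
  );
}

connection.connect(err => {
  if (err) throw err;
  pollNewTrades();
});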
Hello

Configuring sharding: what will happen if we lose a node? Do we lose the entire database?
• What happens in case of a write conflict? Are we able to rebuild the database from a source (primary storage)?

Thanks

Regards
