NoSQL Databases

122 Solutions, 240 Contributors

A NoSQL database provides a mechanism for the storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases. Motivations for this approach include simplicity of design, simpler "horizontal" scaling to clusters of machines, and finer control over availability. The data structures used by NoSQL databases (e.g. key-value, wide column, graph, or document) differ from those used by default in relational databases, making some operations faster in NoSQL. The data structures used by NoSQL databases are also sometimes viewed as "more flexible" than relational database tables.
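
For example, a document store keeps a record and its related data together in one self-contained unit, where a relational design would normalize the same data into joined tables. A small illustrative sketch in JavaScript notation (the names are invented for the example):

// One self-contained document: in a relational schema the nested
// orders would typically live in a separate, joined table.
const customer = {
    _id: "customer-42",
    name: "Ada",
    orders: [
        { sku: "A-100", qty: 2 },
        { sku: "B-200", qty: 1 }
    ]
}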

I just installed Elasticsearch, MongoDB, and Graylog2, but I am getting an error and am unable to access the web interface. Please suggest a solution.

/etc/elasticsearch
# ls -lrtha
drwxr-x---.  2 root elasticsearch    6 Apr 24 16:29 scripts
-rwxr-x---.  1 root elasticsearch 2.6K Apr 24 16:04 logging.yml
-rwxr-x---.  1 root elasticsearch 3.2K Jul 20 05:05 elasticsearch.yml


/var/log/graylog-server/server.log

2017-07-20T07:50:36.694Z INFO  [CmdLineTool] Loaded plugin: Elastic Beats Input 2.2.3 [org.graylog.plugins.beats.BeatsInputPlugin]
2017-07-20T07:50:36.696Z INFO  [CmdLineTool] Loaded plugin: Collector 2.2.3 [org.graylog.plugins.collector.CollectorPlugin]
2017-07-20T07:50:36.699Z INFO  [CmdLineTool] Loaded plugin: Enterprise Integration Plugin 2.2.3 [org.graylog.plugins.enterprise_integration.EnterpriseIntegrationPlugin]
2017-07-20T07:50:36.700Z INFO  [CmdLineTool] Loaded plugin: MapWidgetPlugin 2.2.3 [org.graylog.plugins.map.MapWidgetPlugin]
2017-07-20T07:50:36.708Z INFO  [CmdLineTool] Loaded plugin: Pipeline Processor Plugin 2.2.3 [org.graylog.plugins.pipelineprocessor.ProcessorPlugin]
2017-07-20T07:50:36.709Z INFO  [CmdLineTool] Loaded plugin: Anonymous Usage Statistics 2.2.3 [org.graylog.plugins.usagestatistics.UsageStatsPlugin]
2017-07-20T07:50:36.813Z ERROR [CmdLineTool] Invalid configuration
com.github.joschi.jadconfig.ValidationException: Cannot read file elasticsearch_config_file at path /etc/elasticsearch/elasticsearch.yml. Please specify the correct …

Hi,

Please bear with me; my Mongo knowledge is minimal and my Google-fu has failed me after an hour or two.

I have data in Mongo that looks as follows:

{
    "_id" : ObjectId("xxxx"),
    "applicationData" : {
        "groups" : [
            "group1",
            "group2"
        ],
        "status" : "ok"
    },
    "appid" : "UID0001"
}

{
    "_id" : ObjectId("yyyy"),
    "applicationData" : {
        "groups" : [
            "group2"
        ],
        "status" : "ok"
    },
    "appid" : "UID0001"
}

From this I want to group by the contents of "groups" and the appid, and get a count; the result would be as follows:

group1  UID0001 1
group2  UID0001 2

One step further: I'd only like it to report rows where the count is greater than 1.

How can I achieve this?

I guess, in SQL, it would be:

select groups, appid, count(*) from db group by groups, appid having count(*) > 1

I'm not having much (any) success, further complicated by the fact that one of the values I want to group by is in an array.

Thanks in advance
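
For reference, this shape of query is what the aggregation pipeline is for: $unwind flattens the array so each group name becomes its own document, $group counts, and $match filters on the count (the equivalent of HAVING). A sketch, assuming the collection is called apps since the name is not given in the question:

db.apps.aggregate([
    // One document per (group, appid) pair
    { $unwind: "$applicationData.groups" },
    // Count the occurrences of each pair
    { $group: {
        _id: { group: "$applicationData.groups", appid: "$appid" },
        count: { $sum: 1 }
    } },
    // Keep only counts greater than 1
    { $match: { count: { $gt: 1 } } }
])
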
I am trying to connect to MongoDB in MongoDB Atlas from my JavaScript, but I keep getting the following error.

MongoError: connection 5 to isaaccluster-shard-00-02-yng8g.mongodb.net:27017 closed
    at Function.MongoError.create (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\error.js:29:11)
    at TLSSocket.<anonymous> (C:\Users\558642\ga\js-dc-5\11-crud-and-dbs\assignment\todo\node_modules\mongodb-core\lib\connection\connection.js:202:22)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:101:20)
    at TLSSocket.emit (events.js:191:7)
    at _handle.close (net.js:511:12)
    at Socket.done (_tls_wrap.js:332:7)
    at Object.onceWrapper (events.js:293:19)
    at emitOne (events.js:96:13)
    at Socket.emit (events.js:191:7)
    at TCP._handle.close [as _onclose] (net.js:511:12)


Here's my code
const express = require('express')
const hbs = require('express-handlebars')
const mongoose = require('mongoose')
const bodyParser = require('body-parser')

mongoose.connect('mongodb://XXXXXXXXXXXX@isaaccluster-shard-00-00-yng8g.mongodb.net:27017,isaaccluster-shard-00-01-yng8g.mongodb.net:27017,isaaccluster-shard-00-02-yng8g.mongodb.net:27017/<DATABASE>?ssl=true&replicaSet=IsaacCluster-shard-0&authSource=admin')

const itemEntry = require('./models/toDoEntry.js')
const app = express()

app.get('/', function( req, res ) {
	itemEntry.find({}, function( err, itemEntries ) {
			res.render('todoList',

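The stack trace shows the TLS socket being closed by the remote side. With Atlas clusters this is commonly a network-access (IP whitelist) or credentials problem rather than a bug in the route code. As a hedged sketch, attaching handlers to the mongoose connection makes the underlying failure visible instead of surfacing later as a closed socket ('uri' stands for the Atlas connection string above):

// Sketch: surface connection problems explicitly.
mongoose.connect(uri)
mongoose.connection.on('error', function (err) {
    console.error('Mongo connection error:', err.message)
})
mongoose.connection.once('open', function () {
    console.log('Connected to Atlas')
})
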
I am using PHP and MongoDB. I am new to MongoDB, but fine with PHP. In my MongoDB database I have a collection called 'users' with a number of fields such as firstname, lastname, etc. There are a number of users.


Some users also have a field called web_links which contains an array, i.e. within my users document:
{
    "_id" : ObjectId("587af11ec09cf31a1955ed92"),
    "username" : "mike",
    "firstname" : "Mike",
    "lastname" : "Tester",
    "email" : "mike@test.com",
    "web_links" : [
        {
            "name" : "google",
            "link" : "https://google.com",
            "status" : "1",
            "added" : ISODate("1970-01-18T08:00:57.600Z")
        },
        {
            "name" : "yahoo",
            "link" : "https://yahoo.com",
            "status" : "1",
            "added" : ISODate("1970-01-18T08:00:57.600Z")
        }
    ]
}

Firstly, I am trying to update the status of an array item.
I have got as far as this:

$name = 'google';
$newstatus = '0';

$this->database->users->updateOne(['_id'=>new MongoDB\BSON\ObjectId($userid)],['web_links.name'=>$name],['$set'=>['web_links.$.status'=>$newstatus]]);

This is returning the following error via Firebug:
Uncaught exception 'MongoDB\Exception\InvalidArgumentException' with message 'First
 key in $update argument is not an update operator'.

I have tried to change $set to $push, but that is not the answer.

I would be very grateful for assistance …
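
For what it's worth, the error happens because updateOne() takes the filter and the update as two separate arguments; in the call above the ['web_links.name'=>$name] condition is passed as the second argument, so the driver finds 'web_links.name' where it expects an update operator such as $set. A sketch of the intended call in mongo-shell syntax (in PHP, the name condition would be merged into the first argument in the same way):

db.users.updateOne(
    // Match the user and the array element by name
    { _id: ObjectId("587af11ec09cf31a1955ed92"), "web_links.name": "google" },
    // The positional $ operator targets the matched array element
    { $set: { "web_links.$.status": "0" } }
)
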
I am learning Angular, Node, npm, deployd, MongoDB, etc. I am making my way through a book titled "Pro AngularJS" by Adam Freeman.

On page 120 I am attempting to prepare the data for a "real world" application called "sportsstore".

I was instructed to install a module called "deployd", which apparently is used for modeling APIs for web applications.

I did that, and when I try to start the "deployd" service I get an error:

C:\PROGRA~2\deployd>dpd -p 5500 sportsstore\app.dpd --mongod
starting deployd v0.8.9...
internal/child_process.js:289
  var err = this._handle.spawn(options);
                         ^

TypeError: Bad argument
    at TypeError (native)
    at ChildProcess.spawn (internal/child_process.js:289:26)
    at exports.spawn (child_process.js:380:9)
    at Object.exports.restart (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\lib\util\mongod.js:38:14)
    at Command.start (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\bin\dpd:149:16)
    at Command.listener (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\node_modules\commander\index.js:301:8)
    at emitOne (events.js:96:13)
    at Command.emit (events.js:188:7)
    at Command.parseArgs (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\node_modules\commander\index.js:617:12)
    at Command.parse (C:\Users\knowlton\AppData\Roaming\npm\node_modules\deployd\node_modules\commander\index.js:458:21)
    at Object.<anonymous> 

This post looks at MongoDB and MySQL, and covers high-level MongoDB strengths, weaknesses, features, and uses from the perspective of an SQL user.

In this series, we will discuss common questions received as a database Solutions Engineer at Percona. In this role, we speak with a wide array of MySQL and MongoDB users responsible for environments ranging from extremely large and complex down to smaller single-server setups.

This post contains step-by-step instructions for setting up alerting in Percona Monitoring and Management (PMM) using Grafana.

Recently I was talking with Tim Sharp, one of my colleagues from our Technical Account Manager team, about MongoDB’s scalability. While doing some quick training with some of the Percona team, Tim brought something to my attention...

A simple login application using Node.js, MongoDB and Express. There are tons of tutorials on Node.js and Express, but most of them use extensive plugins that would confuse beginners; the primary motive of this article is to capture the bare minimal functionality.

Hi there,

I have a mongo aggregation query that works fine in the mongo shell (and Robomongo), but I cannot work out how to translate it into a PHP query.

I am using PHP 5.6 with the latest mongo class (MongoDB\Driver\Query). The mongo query looks like this:

db.products.aggregate(
   [
     {$match: {
             vendor_name : "vendor8",
             distributor_id : 8
         }
     },    
     { $sort: { 
         cw_product_code: 1, download_Date: 1 
         } 
     },
     { $group:
         {
           _id: "$cw_product_id",
           lastDownloadDate: { $last: "$download_Date" },
         }
     }        
   ],
     {allowDiskUse: true}     
)


Any help to point me in the right direction would be appreciated.
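
One possible pointer: the low-level PHP driver runs aggregations as database commands (MongoDB\Driver\Command) rather than through MongoDB\Driver\Query, which is for find-style queries only. The command document mirrors the shell call; as a sketch, this is the equivalent raw command in shell syntax (newer servers also require the cursor field):

db.runCommand({
    aggregate: "products",
    pipeline: [
        { $match: { vendor_name: "vendor8", distributor_id: 8 } },
        { $sort: { cw_product_code: 1, download_Date: 1 } },
        { $group: {
            _id: "$cw_product_id",
            lastDownloadDate: { $last: "$download_Date" }
        } }
    ],
    allowDiskUse: true,
    cursor: {}
})

The same document, expressed as a PHP array, is what a MongoDB\Driver\Command would be built from.
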
Hi, I have the following document in MongoDB:

name: "john",
state: "GA",
city: [
     {"atlanta", 30350},
     {"atlanta", 30351},
     {"atlanta", 30352},
     {"marietta", 45093}
]


How do I aggregate the array of cities and get a document like this:

name: "john",
state: "GA",
city: [
     {"atlanta", "30350, 30351, 30352"},
     {"marietta", "45093"}
]
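
As written, the city entries are not valid BSON, so assuming each element is really shaped like { name: "atlanta", zip: 30350 }, a sketch of collecting the zips per city with the aggregation pipeline (the collection name is invented):

db.people.aggregate([
    // One document per city entry
    { $unwind: "$city" },
    // Collect the zips for each (person, city) pair
    { $group: {
        _id: { name: "$name", state: "$state", city: "$city.name" },
        zips: { $push: "$city.zip" }
    } },
    // Reassemble one document per person
    { $group: {
        _id: { name: "$_id.name", state: "$_id.state" },
        city: { $push: { name: "$_id.city", zips: "$zips" } }
    } }
])

Joining each zips array into a single comma-separated string, as in the desired output, could then be done in the application or with an extra pipeline stage.
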
I’m looking for a good architecture to be able to efficiently query data currently stored in NoSQL dbs (specifically DocumentDB).

We have a number of microservices that manage various entities (say client, product, etc). Each stores its data locally (in DocumentDB). We want to create another microservice that provides the ability for real-time (latency on the order of seconds) ad-hoc queries over this data.

One option is to replicate all this data and store it in an SQL db, and build the query service on top of it. I expect this would make the queries quite fast, especially if we index all columns. (Of course, since this data keeps changing, we’d listen to a message queue for db updates.)

Is this the best way? How do companies go about building ad-hoc query functionality over NoSQL data? This seems like a problem that many large companies would have to solve. (I am new to NoSQL and microservice architecture, so I might be missing something basic.) Any suggestions or alternatives are appreciated.
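
For what it's worth, the replicate-on-change option described above is a common pattern (often described as a materialized view, or the read side of CQRS). A sketch of the flow in JavaScript, where 'queue' and 'sql' are hypothetical clients standing in for a real message bus and relational store, not any specific library API:

// On every entity change event, upsert the document into the query store.
queue.subscribe("entity-updated", async function (msg) {
    const doc = msg.body
    await sql.query(
        "INSERT INTO entities (id, type, data) VALUES ($1, $2, $3) " +
        "ON CONFLICT (id) DO UPDATE SET type = $2, data = $3",
        [doc.id, doc.type, JSON.stringify(doc)]
    )
})
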
Hello Experts,

Any advice on how to export data from Apache Cassandra using a local Windows machine?

Please assist.
Thank you
Is it just me, or is NoSQL overapplied? It seems like, in an effort to be as trendy as possible, people shoehorn obviously relational problems into non-relational schemas. If you have to start throwing foreign keys on your objects, I think you need to rethink your solution. Sarah Mei makes a good point about this in her article:

http://www.sarahmei.com/blog/2013/11/11/why-you-should-never-use-mongodb/
Expert Comment by Phil Phillips:
It was definitely overapplied initially, though I think it's gotten better over the years. Part of it, too, is that a lot of the relational databases started incorporating NoSQL-like features (e.g. PostgreSQL and its JSON capabilities).
Hello there,

I am a Java developer and am very new to Linux and Cassandra. I am using Ubuntu 16.04 and have installed Cassandra v3.9.0 from the DataStax site. Now when I run the bin/cqlsh command from the Cassandra directory I get the error:
No appropriate python interpreter found.

To make sure I have Python, I tried the command python -v, and I get this message:
The program 'python' can be found in the following packages:

Do I have Python by default? Please help.

cheers
Zolf
Hi,

I would like to know how to enforce a primary key in DynamoDB. Is there any way to do that? Any example would help us.

roy..
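
Some context that may help: in DynamoDB the primary key is declared when the table is created, as a partition (hash) key, optionally plus a sort (range) key, and every item must include it. A sketch using the Node.js AWS SDK, with the table and attribute names invented for the example:

// Create a table whose primary key is the 'userId' partition key.
const AWS = require("aws-sdk")
const dynamodb = new AWS.DynamoDB({ region: "us-east-1" })

dynamodb.createTable({
    TableName: "Users",
    AttributeDefinitions: [{ AttributeName: "userId", AttributeType: "S" }],
    KeySchema: [{ AttributeName: "userId", KeyType: "HASH" }],
    ProvisionedThroughput: { ReadCapacityUnits: 5, WriteCapacityUnits: 5 }
}, function (err) {
    if (err) console.error(err)
})

Note that an unconditional put with an existing key silently overwrites the item; to make inserts fail instead, putItem accepts a ConditionExpression such as attribute_not_exists(userId).
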
Hello Experts,

I would like to know whether the approach below can help us build an online application where a registered user can upload videos and comments. We have chosen an RDBMS (MySQL) and a NoSQL database (MongoDB). In the RDBMS we would keep the master data, such as the registered-user, country, and region tables, which would be static. The transactional data, such as uploaded videos and comments, would go in the NoSQL database (MongoDB). With this approach, will we manage the data better, or is there another approach that would yield a better result, infrastructure-wise? We will be deploying on Amazon Web Services; the idea is to have agility, performance, and scalability reign supreme.

Thanks

Roy,,,

Hi,

Assume I have a parent class that I filter on various properties, one of which is an array of items. Now say that I want to return the parent item only if its array of items has values above a min and below a max; that's fine, I can work that bit out. What if I then want to sort on the filtered result set of those items?

I made a C# fiddle example to show what I'm trying to achieve: https://dotnetfiddle.net/mV4d28 (note that foo2 is returned first even though foo1 has items in its array that are less than those in foo2).

As I need to do this using an index, I need the index to be able to compute the order by based on the filter criteria used in my query.

I know Elasticsearch has an inner-hits function that does this, and Mongo has pipelines which also do this, so I'm sure Raven must have a way of doing this too?

I was hoping that using just an index and a transform with params I could achieve this, so I tried it.

My index and transform look like this:

public class familyTransfrom : AbstractTransformerCreationTask<ParentItem>
{
    public class Result : ParentItem{
        public double[] ChildItemValuesFiltered { get; set; }
    }
    public familyTransfrom(){
        TransformResults = parents => from parent in parents
        let filterMinValue = Convert.ToDouble(ParameterOrDefault("FilterMinValue", Convert.ToDouble(0)).Value<double>())
        let filterMaxValue = 

I am currently trying to run a script that I have written; however, it constantly hangs halfway through. It has worked on the odd occasion, but more often than not it gets nowhere. Here is the script:

:: load metadata only

impdp  *username*/******@Server1 exclude=user REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN schemas=SC_MUTBLDN CONTENT=METADATA_ONLY dumpfile=ALL_METADATA_DAILY.dmp logfile=ALL_METADATA_DAILY.log



:: load data only

impdp  *username*/******@Server1  Full=Y EXCLUDE=TABLE:"LIKE'CRAC%'"  dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log

impdp  *username*/******@Server1  EXCLUDE=TABLE:"LIKE'CRAC%'"  REMAP_SCHEMA=SC_MUTBLDN:SC_MUTBLDN   dumpfile=ALL_DATA_DAILY.DMP table_exists_action=replace  logfile=ALL_DATA_DAILY.log


:: dump data minus crac tables

impdp *username*/******@Server1 tables=CRAC_TYPE dumpfile=CRAC_TYPE.dmp table_exists_action=replace logfile=CRAC_TYPE.log
impdp *username*/******@Server1 tables=CRAC_KEYWORD dumpfile=CRAC_KEYWORD.dmp table_exists_action=replace logfile=CRAC_KEYWORD.log
impdp *username*/******@Server1 tables=CRAC_DEALTYPE dumpfile=CRAC_DEALTYPE.dmp table_exists_action=replace logfile=CRAC_DEALTYPE.log
impdp *username*/******@Server1 tables=CRAC_TABLENAME dumpfile=CRAC_TABLENAME.dmp table_exists_action=replace logfile=CRAC_TABLENAME.log
impdp *username*/******@Server1 tables=CRAC_DATABASENAME dumpfile=CRAC_DATABASENAME.dmp table_exists_action=replace logfile=CRAC_DATABASENAME.log
I have a 3-node DataStax Cassandra (Community) cluster with huge data. I have a few tables which contain 3-5 billion records each. I want to delete data that is older than 90 days from those tables.

The problem is how to run a select query that completes without timing out. I am currently running the query below:

NOW=$(date -d "-3 month" +"%Y-%m-%d")
select day_ts from table_name where minute_ts < '$NOW' LIMIT 100000 ALLOW FILTERING;

Even if I limit the select query result, it will still parse the whole 3-5 billion records and then filter the data.

Please suggest an efficient way to do this.
Is MarkLogic v5 FIPS compatible/enabled? I can't find anything online for this version.
I have a financial project which receives real-time stock data from a data vendor, saves it into a MySQL database, then retrieves the data and sends it to the end user's browser. The client software provided by the data vendor to receive the stock data is a C/C++ program running on the server; it can save the data into the MySQL database (it does not have to be MySQL, and could be switched to any other database). In order to retrieve the data from the database as quickly as possible, is there any framework I can use? I have heard about CEP or ESP, and Spark Streaming; can any of them be used for my project? If not, how can I retrieve only the un-read data from the database as soon as it reaches the database? The stock data feed is probably about 1000 records a second at maximum (my wild guess, which might not be correct). See the sample below.

+---------------------+--------+-------------------+-------------+
| insertTime          | symbal | trade_time        | trade_price |
+---------------------+--------+-------------------+-------------+
| 2016-09-15 04:00:00 | AAPL   | 20160915040000017 |      111.70 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000017 |      111.70 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000200 |      111.69 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000200 |      111.69 |
| 2016-09-15 04:00:00 | AAPL   | 20160915040000272 |      111.51 |
| 2016-09-15 04:01:14 | AAPL   | 20160915040113878 |      111.57 |
| 2016-09-15 04:01:14 | AAPL   …
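
On the "retrieve only the un-read data" point: independent of any streaming framework, a common approach is to keep a watermark of the last row already processed (for example an auto-increment id) and poll for anything newer. A sketch in JavaScript, where 'db.query' and 'pushToBrowsers' are hypothetical stand-ins and the table is assumed to have an auto-increment id column:

// Poll for rows newer than the last processed id and push them out.
let lastId = 0
setInterval(async function () {
    const rows = await db.query(
        "SELECT id, symbal, trade_time, trade_price FROM trades " +
        "WHERE id > ? ORDER BY id",
        [lastId]
    )
    if (rows.length > 0) {
        lastId = rows[rows.length - 1].id
        pushToBrowsers(rows)
    }
}, 250)  // a few polls per second

At roughly 1000 rows a second this kind of polling is usually workable; CEP/ESP engines or Spark Streaming become more compelling when the per-event processing itself is complex.
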
Hello

Configuring sharding: what will happen if we lose a node? Do we lose the entire database? What happens in the case of a write conflict? Are we able to rebuild the database from a source (primary storage)?

Thanks

Regards
